Test Report: KVM_Linux_crio 20602

a90248a4a931d52b681e38138304d5427e54b74a:2025-04-07:39037

Failed tests (12/322)

TestAddons/parallel/Ingress (153.98s)

=== RUN   TestAddons/parallel/Ingress
=== PAUSE TestAddons/parallel/Ingress

=== CONT  TestAddons/parallel/Ingress
addons_test.go:207: (dbg) Run:  kubectl --context addons-660533 wait --for=condition=ready --namespace=ingress-nginx pod --selector=app.kubernetes.io/component=controller --timeout=90s
addons_test.go:232: (dbg) Run:  kubectl --context addons-660533 replace --force -f testdata/nginx-ingress-v1.yaml
addons_test.go:245: (dbg) Run:  kubectl --context addons-660533 replace --force -f testdata/nginx-pod-svc.yaml
addons_test.go:250: (dbg) TestAddons/parallel/Ingress: waiting 8m0s for pods matching "run=nginx" in namespace "default" ...
helpers_test.go:344: "nginx" [721b8242-08d2-4e4b-b477-33911134cbdd] Pending / Ready:ContainersNotReady (containers with unready status: [nginx]) / ContainersReady:ContainersNotReady (containers with unready status: [nginx])
helpers_test.go:344: "nginx" [721b8242-08d2-4e4b-b477-33911134cbdd] Running
addons_test.go:250: (dbg) TestAddons/parallel/Ingress: run=nginx healthy within 12.005664209s
I0407 12:18:04.837263 1169716 kapi.go:150] Service nginx in namespace default found.
addons_test.go:262: (dbg) Run:  out/minikube-linux-amd64 -p addons-660533 ssh "curl -s http://127.0.0.1/ -H 'Host: nginx.example.com'"
addons_test.go:262: (dbg) Non-zero exit: out/minikube-linux-amd64 -p addons-660533 ssh "curl -s http://127.0.0.1/ -H 'Host: nginx.example.com'": exit status 1 (2m8.593479009s)

** stderr ** 
	ssh: Process exited with status 28

** /stderr **
addons_test.go:278: failed to get expected response from http://127.0.0.1/ within minikube: exit status 1
addons_test.go:286: (dbg) Run:  kubectl --context addons-660533 replace --force -f testdata/ingress-dns-example-v1.yaml
addons_test.go:291: (dbg) Run:  out/minikube-linux-amd64 -p addons-660533 ip
addons_test.go:297: (dbg) Run:  nslookup hello-john.test 192.168.39.112
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p addons-660533 -n addons-660533
helpers_test.go:244: <<< TestAddons/parallel/Ingress FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestAddons/parallel/Ingress]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p addons-660533 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-amd64 -p addons-660533 logs -n 25: (1.405353878s)
helpers_test.go:252: TestAddons/parallel/Ingress logs: 
-- stdout --
	
	==> Audit <==
	|---------|---------------------------------------------------------------------------------------------|----------------------|---------|---------|---------------------|---------------------|
	| Command |                                            Args                                             |       Profile        |  User   | Version |     Start Time      |      End Time       |
	|---------|---------------------------------------------------------------------------------------------|----------------------|---------|---------|---------------------|---------------------|
	| delete  | -p download-only-170875                                                                     | download-only-170875 | jenkins | v1.35.0 | 07 Apr 25 12:13 UTC | 07 Apr 25 12:13 UTC |
	| delete  | -p download-only-577957                                                                     | download-only-577957 | jenkins | v1.35.0 | 07 Apr 25 12:13 UTC | 07 Apr 25 12:13 UTC |
	| delete  | -p download-only-170875                                                                     | download-only-170875 | jenkins | v1.35.0 | 07 Apr 25 12:13 UTC | 07 Apr 25 12:13 UTC |
	| start   | --download-only -p                                                                          | binary-mirror-592205 | jenkins | v1.35.0 | 07 Apr 25 12:13 UTC |                     |
	|         | binary-mirror-592205                                                                        |                      |         |         |                     |                     |
	|         | --alsologtostderr                                                                           |                      |         |         |                     |                     |
	|         | --binary-mirror                                                                             |                      |         |         |                     |                     |
	|         | http://127.0.0.1:33293                                                                      |                      |         |         |                     |                     |
	|         | --driver=kvm2                                                                               |                      |         |         |                     |                     |
	|         | --container-runtime=crio                                                                    |                      |         |         |                     |                     |
	| delete  | -p binary-mirror-592205                                                                     | binary-mirror-592205 | jenkins | v1.35.0 | 07 Apr 25 12:13 UTC | 07 Apr 25 12:13 UTC |
	| addons  | disable dashboard -p                                                                        | addons-660533        | jenkins | v1.35.0 | 07 Apr 25 12:13 UTC |                     |
	|         | addons-660533                                                                               |                      |         |         |                     |                     |
	| addons  | enable dashboard -p                                                                         | addons-660533        | jenkins | v1.35.0 | 07 Apr 25 12:13 UTC |                     |
	|         | addons-660533                                                                               |                      |         |         |                     |                     |
	| start   | -p addons-660533 --wait=true                                                                | addons-660533        | jenkins | v1.35.0 | 07 Apr 25 12:13 UTC | 07 Apr 25 12:17 UTC |
	|         | --memory=4000 --alsologtostderr                                                             |                      |         |         |                     |                     |
	|         | --addons=registry                                                                           |                      |         |         |                     |                     |
	|         | --addons=metrics-server                                                                     |                      |         |         |                     |                     |
	|         | --addons=volumesnapshots                                                                    |                      |         |         |                     |                     |
	|         | --addons=csi-hostpath-driver                                                                |                      |         |         |                     |                     |
	|         | --addons=gcp-auth                                                                           |                      |         |         |                     |                     |
	|         | --addons=cloud-spanner                                                                      |                      |         |         |                     |                     |
	|         | --addons=inspektor-gadget                                                                   |                      |         |         |                     |                     |
	|         | --addons=nvidia-device-plugin                                                               |                      |         |         |                     |                     |
	|         | --addons=yakd --addons=volcano                                                              |                      |         |         |                     |                     |
	|         | --addons=amd-gpu-device-plugin                                                              |                      |         |         |                     |                     |
	|         | --driver=kvm2                                                                               |                      |         |         |                     |                     |
	|         | --container-runtime=crio                                                                    |                      |         |         |                     |                     |
	|         | --addons=ingress                                                                            |                      |         |         |                     |                     |
	|         | --addons=ingress-dns                                                                        |                      |         |         |                     |                     |
	|         | --addons=storage-provisioner-rancher                                                        |                      |         |         |                     |                     |
	| addons  | addons-660533 addons disable                                                                | addons-660533        | jenkins | v1.35.0 | 07 Apr 25 12:17 UTC | 07 Apr 25 12:17 UTC |
	|         | volcano --alsologtostderr -v=1                                                              |                      |         |         |                     |                     |
	| addons  | addons-660533 addons disable                                                                | addons-660533        | jenkins | v1.35.0 | 07 Apr 25 12:17 UTC | 07 Apr 25 12:17 UTC |
	|         | gcp-auth --alsologtostderr                                                                  |                      |         |         |                     |                     |
	|         | -v=1                                                                                        |                      |         |         |                     |                     |
	| addons  | enable headlamp                                                                             | addons-660533        | jenkins | v1.35.0 | 07 Apr 25 12:17 UTC | 07 Apr 25 12:17 UTC |
	|         | -p addons-660533                                                                            |                      |         |         |                     |                     |
	|         | --alsologtostderr -v=1                                                                      |                      |         |         |                     |                     |
	| addons  | addons-660533 addons                                                                        | addons-660533        | jenkins | v1.35.0 | 07 Apr 25 12:17 UTC | 07 Apr 25 12:17 UTC |
	|         | disable metrics-server                                                                      |                      |         |         |                     |                     |
	|         | --alsologtostderr -v=1                                                                      |                      |         |         |                     |                     |
	| addons  | addons-660533 addons disable                                                                | addons-660533        | jenkins | v1.35.0 | 07 Apr 25 12:17 UTC | 07 Apr 25 12:17 UTC |
	|         | yakd --alsologtostderr -v=1                                                                 |                      |         |         |                     |                     |
	| addons  | addons-660533 addons disable                                                                | addons-660533        | jenkins | v1.35.0 | 07 Apr 25 12:17 UTC | 07 Apr 25 12:17 UTC |
	|         | headlamp --alsologtostderr                                                                  |                      |         |         |                     |                     |
	|         | -v=1                                                                                        |                      |         |         |                     |                     |
	| ip      | addons-660533 ip                                                                            | addons-660533        | jenkins | v1.35.0 | 07 Apr 25 12:17 UTC | 07 Apr 25 12:17 UTC |
	| addons  | addons-660533 addons disable                                                                | addons-660533        | jenkins | v1.35.0 | 07 Apr 25 12:17 UTC | 07 Apr 25 12:17 UTC |
	|         | registry --alsologtostderr                                                                  |                      |         |         |                     |                     |
	|         | -v=1                                                                                        |                      |         |         |                     |                     |
	| addons  | addons-660533 addons                                                                        | addons-660533        | jenkins | v1.35.0 | 07 Apr 25 12:17 UTC | 07 Apr 25 12:17 UTC |
	|         | disable nvidia-device-plugin                                                                |                      |         |         |                     |                     |
	|         | --alsologtostderr -v=1                                                                      |                      |         |         |                     |                     |
	| addons  | addons-660533 addons                                                                        | addons-660533        | jenkins | v1.35.0 | 07 Apr 25 12:17 UTC | 07 Apr 25 12:17 UTC |
	|         | disable cloud-spanner                                                                       |                      |         |         |                     |                     |
	|         | --alsologtostderr -v=1                                                                      |                      |         |         |                     |                     |
	| addons  | addons-660533 addons                                                                        | addons-660533        | jenkins | v1.35.0 | 07 Apr 25 12:17 UTC | 07 Apr 25 12:18 UTC |
	|         | disable inspektor-gadget                                                                    |                      |         |         |                     |                     |
	|         | --alsologtostderr -v=1                                                                      |                      |         |         |                     |                     |
	| ssh     | addons-660533 ssh cat                                                                       | addons-660533        | jenkins | v1.35.0 | 07 Apr 25 12:18 UTC | 07 Apr 25 12:18 UTC |
	|         | /opt/local-path-provisioner/pvc-9c6cc59a-2996-4d4d-8cfd-22882b3ed36f_default_test-pvc/file1 |                      |         |         |                     |                     |
	| addons  | addons-660533 addons disable                                                                | addons-660533        | jenkins | v1.35.0 | 07 Apr 25 12:18 UTC | 07 Apr 25 12:18 UTC |
	|         | storage-provisioner-rancher                                                                 |                      |         |         |                     |                     |
	|         | --alsologtostderr -v=1                                                                      |                      |         |         |                     |                     |
	| ssh     | addons-660533 ssh curl -s                                                                   | addons-660533        | jenkins | v1.35.0 | 07 Apr 25 12:18 UTC |                     |
	|         | http://127.0.0.1/ -H 'Host:                                                                 |                      |         |         |                     |                     |
	|         | nginx.example.com'                                                                          |                      |         |         |                     |                     |
	| addons  | addons-660533 addons                                                                        | addons-660533        | jenkins | v1.35.0 | 07 Apr 25 12:18 UTC | 07 Apr 25 12:18 UTC |
	|         | disable volumesnapshots                                                                     |                      |         |         |                     |                     |
	|         | --alsologtostderr -v=1                                                                      |                      |         |         |                     |                     |
	| addons  | addons-660533 addons                                                                        | addons-660533        | jenkins | v1.35.0 | 07 Apr 25 12:18 UTC | 07 Apr 25 12:18 UTC |
	|         | disable csi-hostpath-driver                                                                 |                      |         |         |                     |                     |
	|         | --alsologtostderr -v=1                                                                      |                      |         |         |                     |                     |
	| ip      | addons-660533 ip                                                                            | addons-660533        | jenkins | v1.35.0 | 07 Apr 25 12:20 UTC | 07 Apr 25 12:20 UTC |
	|---------|---------------------------------------------------------------------------------------------|----------------------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2025/04/07 12:13:41
	Running on machine: ubuntu-20-agent-2
	Binary: Built with gc go1.24.0 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0407 12:13:41.660563 1170330 out.go:345] Setting OutFile to fd 1 ...
	I0407 12:13:41.660876 1170330 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0407 12:13:41.660889 1170330 out.go:358] Setting ErrFile to fd 2...
	I0407 12:13:41.660893 1170330 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0407 12:13:41.661121 1170330 root.go:338] Updating PATH: /home/jenkins/minikube-integration/20602-1162386/.minikube/bin
	I0407 12:13:41.661920 1170330 out.go:352] Setting JSON to false
	I0407 12:13:41.664293 1170330 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-2","uptime":14166,"bootTime":1744013856,"procs":198,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1078-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0407 12:13:41.664607 1170330 start.go:139] virtualization: kvm guest
	I0407 12:13:41.667176 1170330 out.go:177] * [addons-660533] minikube v1.35.0 on Ubuntu 20.04 (kvm/amd64)
	I0407 12:13:41.669028 1170330 notify.go:220] Checking for updates...
	I0407 12:13:41.669091 1170330 out.go:177]   - MINIKUBE_LOCATION=20602
	I0407 12:13:41.670808 1170330 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0407 12:13:41.672691 1170330 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/20602-1162386/kubeconfig
	I0407 12:13:41.674367 1170330 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/20602-1162386/.minikube
	I0407 12:13:41.676273 1170330 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0407 12:13:41.677896 1170330 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0407 12:13:41.679625 1170330 driver.go:394] Setting default libvirt URI to qemu:///system
	I0407 12:13:41.719829 1170330 out.go:177] * Using the kvm2 driver based on user configuration
	I0407 12:13:41.721381 1170330 start.go:297] selected driver: kvm2
	I0407 12:13:41.721427 1170330 start.go:901] validating driver "kvm2" against <nil>
	I0407 12:13:41.721447 1170330 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0407 12:13:41.722459 1170330 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0407 12:13:41.722716 1170330 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/20602-1162386/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0407 12:13:41.742363 1170330 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.35.0
	I0407 12:13:41.742451 1170330 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0407 12:13:41.742705 1170330 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0407 12:13:41.742747 1170330 cni.go:84] Creating CNI manager for ""
	I0407 12:13:41.742783 1170330 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0407 12:13:41.742797 1170330 start_flags.go:319] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I0407 12:13:41.742899 1170330 start.go:340] cluster config:
	{Name:addons-660533 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.46-1743675393-20591@sha256:8de8167e280414c9d16e4c3da59bc85bc7c9cc24228af995863fc7bcabfcf727 Memory:4000 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.32.2 ClusterName:addons-660533 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:c
rio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.32.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAg
entPID:0 GPUs: AutoPauseInterval:1m0s}
	I0407 12:13:41.743072 1170330 iso.go:125] acquiring lock: {Name:mk51e1827709f7a3810dbd898083f8185ece65eb Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0407 12:13:41.745258 1170330 out.go:177] * Starting "addons-660533" primary control-plane node in "addons-660533" cluster
	I0407 12:13:41.746855 1170330 preload.go:131] Checking if preload exists for k8s version v1.32.2 and runtime crio
	I0407 12:13:41.746943 1170330 preload.go:146] Found local preload: /home/jenkins/minikube-integration/20602-1162386/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.32.2-cri-o-overlay-amd64.tar.lz4
	I0407 12:13:41.746955 1170330 cache.go:56] Caching tarball of preloaded images
	I0407 12:13:41.747068 1170330 preload.go:172] Found /home/jenkins/minikube-integration/20602-1162386/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.32.2-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I0407 12:13:41.747086 1170330 cache.go:59] Finished verifying existence of preloaded tar for v1.32.2 on crio
	I0407 12:13:41.747457 1170330 profile.go:143] Saving config to /home/jenkins/minikube-integration/20602-1162386/.minikube/profiles/addons-660533/config.json ...
	I0407 12:13:41.747500 1170330 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20602-1162386/.minikube/profiles/addons-660533/config.json: {Name:mk3ebddde4516e373b8a3202f53242f10f00b94b Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0407 12:13:41.747712 1170330 start.go:360] acquireMachinesLock for addons-660533: {Name:mk51d4c744fa92d56cf6ac11b1e792c85ef6709a Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0407 12:13:41.747788 1170330 start.go:364] duration metric: took 55.856µs to acquireMachinesLock for "addons-660533"
	I0407 12:13:41.747818 1170330 start.go:93] Provisioning new machine with config: &{Name:addons-660533 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.35.0-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.46-1743675393-20591@sha256:8de8167e280414c9d16e4c3da59bc85bc7c9cc24228af995863fc7bcabfcf727 Memory:4000 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.32.2 ClusterNa
me:addons-660533 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.32.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker Bi
naryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.32.2 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0407 12:13:41.747906 1170330 start.go:125] createHost starting for "" (driver="kvm2")
	I0407 12:13:41.750229 1170330 out.go:235] * Creating kvm2 VM (CPUs=2, Memory=4000MB, Disk=20000MB) ...
	I0407 12:13:41.750469 1170330 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0407 12:13:41.750530 1170330 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0407 12:13:41.767341 1170330 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45915
	I0407 12:13:41.768032 1170330 main.go:141] libmachine: () Calling .GetVersion
	I0407 12:13:41.768646 1170330 main.go:141] libmachine: Using API Version  1
	I0407 12:13:41.768674 1170330 main.go:141] libmachine: () Calling .SetConfigRaw
	I0407 12:13:41.769260 1170330 main.go:141] libmachine: () Calling .GetMachineName
	I0407 12:13:41.769577 1170330 main.go:141] libmachine: (addons-660533) Calling .GetMachineName
	I0407 12:13:41.769840 1170330 main.go:141] libmachine: (addons-660533) Calling .DriverName
	I0407 12:13:41.770106 1170330 start.go:159] libmachine.API.Create for "addons-660533" (driver="kvm2")
	I0407 12:13:41.770195 1170330 client.go:168] LocalClient.Create starting
	I0407 12:13:41.770273 1170330 main.go:141] libmachine: Creating CA: /home/jenkins/minikube-integration/20602-1162386/.minikube/certs/ca.pem
	I0407 12:13:42.234824 1170330 main.go:141] libmachine: Creating client certificate: /home/jenkins/minikube-integration/20602-1162386/.minikube/certs/cert.pem
	I0407 12:13:42.335195 1170330 main.go:141] libmachine: Running pre-create checks...
	I0407 12:13:42.335226 1170330 main.go:141] libmachine: (addons-660533) Calling .PreCreateCheck
	I0407 12:13:42.335861 1170330 main.go:141] libmachine: (addons-660533) Calling .GetConfigRaw
	I0407 12:13:42.336521 1170330 main.go:141] libmachine: Creating machine...
	I0407 12:13:42.336541 1170330 main.go:141] libmachine: (addons-660533) Calling .Create
	I0407 12:13:42.336820 1170330 main.go:141] libmachine: (addons-660533) creating KVM machine...
	I0407 12:13:42.336843 1170330 main.go:141] libmachine: (addons-660533) creating network...
	I0407 12:13:42.340794 1170330 main.go:141] libmachine: (addons-660533) DBG | found existing default KVM network
	I0407 12:13:42.342153 1170330 main.go:141] libmachine: (addons-660533) DBG | I0407 12:13:42.341811 1170353 network.go:206] using free private subnet 192.168.39.0/24: &{IP:192.168.39.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.39.0/24 Gateway:192.168.39.1 ClientMin:192.168.39.2 ClientMax:192.168.39.254 Broadcast:192.168.39.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc000013810}
	I0407 12:13:42.342197 1170330 main.go:141] libmachine: (addons-660533) DBG | created network xml: 
	I0407 12:13:42.342215 1170330 main.go:141] libmachine: (addons-660533) DBG | <network>
	I0407 12:13:42.342221 1170330 main.go:141] libmachine: (addons-660533) DBG |   <name>mk-addons-660533</name>
	I0407 12:13:42.342230 1170330 main.go:141] libmachine: (addons-660533) DBG |   <dns enable='no'/>
	I0407 12:13:42.342235 1170330 main.go:141] libmachine: (addons-660533) DBG |   
	I0407 12:13:42.342249 1170330 main.go:141] libmachine: (addons-660533) DBG |   <ip address='192.168.39.1' netmask='255.255.255.0'>
	I0407 12:13:42.342275 1170330 main.go:141] libmachine: (addons-660533) DBG |     <dhcp>
	I0407 12:13:42.342282 1170330 main.go:141] libmachine: (addons-660533) DBG |       <range start='192.168.39.2' end='192.168.39.253'/>
	I0407 12:13:42.342289 1170330 main.go:141] libmachine: (addons-660533) DBG |     </dhcp>
	I0407 12:13:42.342296 1170330 main.go:141] libmachine: (addons-660533) DBG |   </ip>
	I0407 12:13:42.342303 1170330 main.go:141] libmachine: (addons-660533) DBG |   
	I0407 12:13:42.342312 1170330 main.go:141] libmachine: (addons-660533) DBG | </network>
	I0407 12:13:42.342325 1170330 main.go:141] libmachine: (addons-660533) DBG | 
	I0407 12:13:42.350563 1170330 main.go:141] libmachine: (addons-660533) DBG | trying to create private KVM network mk-addons-660533 192.168.39.0/24...
	I0407 12:13:42.455035 1170330 main.go:141] libmachine: (addons-660533) setting up store path in /home/jenkins/minikube-integration/20602-1162386/.minikube/machines/addons-660533 ...
	I0407 12:13:42.455078 1170330 main.go:141] libmachine: (addons-660533) building disk image from file:///home/jenkins/minikube-integration/20602-1162386/.minikube/cache/iso/amd64/minikube-v1.35.0-amd64.iso
	I0407 12:13:42.455091 1170330 main.go:141] libmachine: (addons-660533) DBG | private KVM network mk-addons-660533 192.168.39.0/24 created
	I0407 12:13:42.455113 1170330 main.go:141] libmachine: (addons-660533) DBG | I0407 12:13:42.454984 1170353 common.go:144] Making disk image using store path: /home/jenkins/minikube-integration/20602-1162386/.minikube
	I0407 12:13:42.455327 1170330 main.go:141] libmachine: (addons-660533) Downloading /home/jenkins/minikube-integration/20602-1162386/.minikube/cache/boot2docker.iso from file:///home/jenkins/minikube-integration/20602-1162386/.minikube/cache/iso/amd64/minikube-v1.35.0-amd64.iso...
	I0407 12:13:42.782914 1170330 main.go:141] libmachine: (addons-660533) DBG | I0407 12:13:42.782744 1170353 common.go:151] Creating ssh key: /home/jenkins/minikube-integration/20602-1162386/.minikube/machines/addons-660533/id_rsa...
	I0407 12:13:42.872074 1170330 main.go:141] libmachine: (addons-660533) DBG | I0407 12:13:42.871844 1170353 common.go:157] Creating raw disk image: /home/jenkins/minikube-integration/20602-1162386/.minikube/machines/addons-660533/addons-660533.rawdisk...
	I0407 12:13:42.872131 1170330 main.go:141] libmachine: (addons-660533) setting executable bit set on /home/jenkins/minikube-integration/20602-1162386/.minikube/machines/addons-660533 (perms=drwx------)
	I0407 12:13:42.872142 1170330 main.go:141] libmachine: (addons-660533) DBG | Writing magic tar header
	I0407 12:13:42.872259 1170330 main.go:141] libmachine: (addons-660533) DBG | Writing SSH key tar header
	I0407 12:13:42.872301 1170330 main.go:141] libmachine: (addons-660533) DBG | I0407 12:13:42.871986 1170353 common.go:171] Fixing permissions on /home/jenkins/minikube-integration/20602-1162386/.minikube/machines/addons-660533 ...
	I0407 12:13:42.872322 1170330 main.go:141] libmachine: (addons-660533) setting executable bit set on /home/jenkins/minikube-integration/20602-1162386/.minikube/machines (perms=drwxr-xr-x)
	I0407 12:13:42.872339 1170330 main.go:141] libmachine: (addons-660533) setting executable bit set on /home/jenkins/minikube-integration/20602-1162386/.minikube (perms=drwxr-xr-x)
	I0407 12:13:42.872349 1170330 main.go:141] libmachine: (addons-660533) setting executable bit set on /home/jenkins/minikube-integration/20602-1162386 (perms=drwxrwxr-x)
	I0407 12:13:42.872363 1170330 main.go:141] libmachine: (addons-660533) DBG | checking permissions on dir: /home/jenkins/minikube-integration/20602-1162386/.minikube/machines/addons-660533
	I0407 12:13:42.872377 1170330 main.go:141] libmachine: (addons-660533) DBG | checking permissions on dir: /home/jenkins/minikube-integration/20602-1162386/.minikube/machines
	I0407 12:13:42.872389 1170330 main.go:141] libmachine: (addons-660533) DBG | checking permissions on dir: /home/jenkins/minikube-integration/20602-1162386/.minikube
	I0407 12:13:42.872400 1170330 main.go:141] libmachine: (addons-660533) DBG | checking permissions on dir: /home/jenkins/minikube-integration/20602-1162386
	I0407 12:13:42.872407 1170330 main.go:141] libmachine: (addons-660533) DBG | checking permissions on dir: /home/jenkins/minikube-integration
	I0407 12:13:42.872414 1170330 main.go:141] libmachine: (addons-660533) setting executable bit set on /home/jenkins/minikube-integration (perms=drwxrwxr-x)
	I0407 12:13:42.872426 1170330 main.go:141] libmachine: (addons-660533) setting executable bit set on /home/jenkins (perms=drwxr-xr-x)
	I0407 12:13:42.872431 1170330 main.go:141] libmachine: (addons-660533) creating domain...
	I0407 12:13:42.872446 1170330 main.go:141] libmachine: (addons-660533) DBG | checking permissions on dir: /home/jenkins
	I0407 12:13:42.872553 1170330 main.go:141] libmachine: (addons-660533) DBG | checking permissions on dir: /home
	I0407 12:13:42.872573 1170330 main.go:141] libmachine: (addons-660533) DBG | skipping /home - not owner
	I0407 12:13:42.874154 1170330 main.go:141] libmachine: (addons-660533) define libvirt domain using xml: 
	I0407 12:13:42.874210 1170330 main.go:141] libmachine: (addons-660533) <domain type='kvm'>
	I0407 12:13:42.874222 1170330 main.go:141] libmachine: (addons-660533)   <name>addons-660533</name>
	I0407 12:13:42.874230 1170330 main.go:141] libmachine: (addons-660533)   <memory unit='MiB'>4000</memory>
	I0407 12:13:42.874239 1170330 main.go:141] libmachine: (addons-660533)   <vcpu>2</vcpu>
	I0407 12:13:42.874247 1170330 main.go:141] libmachine: (addons-660533)   <features>
	I0407 12:13:42.874258 1170330 main.go:141] libmachine: (addons-660533)     <acpi/>
	I0407 12:13:42.874271 1170330 main.go:141] libmachine: (addons-660533)     <apic/>
	I0407 12:13:42.874285 1170330 main.go:141] libmachine: (addons-660533)     <pae/>
	I0407 12:13:42.874318 1170330 main.go:141] libmachine: (addons-660533)     
	I0407 12:13:42.874328 1170330 main.go:141] libmachine: (addons-660533)   </features>
	I0407 12:13:42.874333 1170330 main.go:141] libmachine: (addons-660533)   <cpu mode='host-passthrough'>
	I0407 12:13:42.874345 1170330 main.go:141] libmachine: (addons-660533)   
	I0407 12:13:42.874353 1170330 main.go:141] libmachine: (addons-660533)   </cpu>
	I0407 12:13:42.874383 1170330 main.go:141] libmachine: (addons-660533)   <os>
	I0407 12:13:42.874397 1170330 main.go:141] libmachine: (addons-660533)     <type>hvm</type>
	I0407 12:13:42.874407 1170330 main.go:141] libmachine: (addons-660533)     <boot dev='cdrom'/>
	I0407 12:13:42.874419 1170330 main.go:141] libmachine: (addons-660533)     <boot dev='hd'/>
	I0407 12:13:42.874436 1170330 main.go:141] libmachine: (addons-660533)     <bootmenu enable='no'/>
	I0407 12:13:42.874444 1170330 main.go:141] libmachine: (addons-660533)   </os>
	I0407 12:13:42.874450 1170330 main.go:141] libmachine: (addons-660533)   <devices>
	I0407 12:13:42.874534 1170330 main.go:141] libmachine: (addons-660533)     <disk type='file' device='cdrom'>
	I0407 12:13:42.874569 1170330 main.go:141] libmachine: (addons-660533)       <source file='/home/jenkins/minikube-integration/20602-1162386/.minikube/machines/addons-660533/boot2docker.iso'/>
	I0407 12:13:42.874580 1170330 main.go:141] libmachine: (addons-660533)       <target dev='hdc' bus='scsi'/>
	I0407 12:13:42.874587 1170330 main.go:141] libmachine: (addons-660533)       <readonly/>
	I0407 12:13:42.874592 1170330 main.go:141] libmachine: (addons-660533)     </disk>
	I0407 12:13:42.874642 1170330 main.go:141] libmachine: (addons-660533)     <disk type='file' device='disk'>
	I0407 12:13:42.874682 1170330 main.go:141] libmachine: (addons-660533)       <driver name='qemu' type='raw' cache='default' io='threads' />
	I0407 12:13:42.874700 1170330 main.go:141] libmachine: (addons-660533)       <source file='/home/jenkins/minikube-integration/20602-1162386/.minikube/machines/addons-660533/addons-660533.rawdisk'/>
	I0407 12:13:42.874712 1170330 main.go:141] libmachine: (addons-660533)       <target dev='hda' bus='virtio'/>
	I0407 12:13:42.874853 1170330 main.go:141] libmachine: (addons-660533)     </disk>
	I0407 12:13:42.874872 1170330 main.go:141] libmachine: (addons-660533)     <interface type='network'>
	I0407 12:13:42.874888 1170330 main.go:141] libmachine: (addons-660533)       <source network='mk-addons-660533'/>
	I0407 12:13:42.874903 1170330 main.go:141] libmachine: (addons-660533)       <model type='virtio'/>
	I0407 12:13:42.874989 1170330 main.go:141] libmachine: (addons-660533)     </interface>
	I0407 12:13:42.875030 1170330 main.go:141] libmachine: (addons-660533)     <interface type='network'>
	I0407 12:13:42.875043 1170330 main.go:141] libmachine: (addons-660533)       <source network='default'/>
	I0407 12:13:42.875051 1170330 main.go:141] libmachine: (addons-660533)       <model type='virtio'/>
	I0407 12:13:42.875062 1170330 main.go:141] libmachine: (addons-660533)     </interface>
	I0407 12:13:42.875073 1170330 main.go:141] libmachine: (addons-660533)     <serial type='pty'>
	I0407 12:13:42.875085 1170330 main.go:141] libmachine: (addons-660533)       <target port='0'/>
	I0407 12:13:42.875093 1170330 main.go:141] libmachine: (addons-660533)     </serial>
	I0407 12:13:42.875106 1170330 main.go:141] libmachine: (addons-660533)     <console type='pty'>
	I0407 12:13:42.875126 1170330 main.go:141] libmachine: (addons-660533)       <target type='serial' port='0'/>
	I0407 12:13:42.875138 1170330 main.go:141] libmachine: (addons-660533)     </console>
	I0407 12:13:42.875148 1170330 main.go:141] libmachine: (addons-660533)     <rng model='virtio'>
	I0407 12:13:42.875161 1170330 main.go:141] libmachine: (addons-660533)       <backend model='random'>/dev/random</backend>
	I0407 12:13:42.875170 1170330 main.go:141] libmachine: (addons-660533)     </rng>
	I0407 12:13:42.875179 1170330 main.go:141] libmachine: (addons-660533)     
	I0407 12:13:42.875184 1170330 main.go:141] libmachine: (addons-660533)     
	I0407 12:13:42.875192 1170330 main.go:141] libmachine: (addons-660533)   </devices>
	I0407 12:13:42.875203 1170330 main.go:141] libmachine: (addons-660533) </domain>
	I0407 12:13:42.875343 1170330 main.go:141] libmachine: (addons-660533) 
	I0407 12:13:42.886115 1170330 main.go:141] libmachine: (addons-660533) DBG | domain addons-660533 has defined MAC address 52:54:00:65:70:cc in network default
	I0407 12:13:42.887487 1170330 main.go:141] libmachine: (addons-660533) DBG | domain addons-660533 has defined MAC address 52:54:00:1e:96:60 in network mk-addons-660533
	I0407 12:13:42.887529 1170330 main.go:141] libmachine: (addons-660533) starting domain...
	I0407 12:13:42.887535 1170330 main.go:141] libmachine: (addons-660533) ensuring networks are active...
	I0407 12:13:42.888757 1170330 main.go:141] libmachine: (addons-660533) Ensuring network default is active
	I0407 12:13:42.889900 1170330 main.go:141] libmachine: (addons-660533) Ensuring network mk-addons-660533 is active
	I0407 12:13:42.891145 1170330 main.go:141] libmachine: (addons-660533) getting domain XML...
	I0407 12:13:42.893058 1170330 main.go:141] libmachine: (addons-660533) creating domain...
	I0407 12:13:44.702457 1170330 main.go:141] libmachine: (addons-660533) waiting for IP...
	I0407 12:13:44.703625 1170330 main.go:141] libmachine: (addons-660533) DBG | domain addons-660533 has defined MAC address 52:54:00:1e:96:60 in network mk-addons-660533
	I0407 12:13:44.704415 1170330 main.go:141] libmachine: (addons-660533) DBG | unable to find current IP address of domain addons-660533 in network mk-addons-660533
	I0407 12:13:44.704516 1170330 main.go:141] libmachine: (addons-660533) DBG | I0407 12:13:44.704417 1170353 retry.go:31] will retry after 206.874483ms: waiting for domain to come up
	I0407 12:13:44.913192 1170330 main.go:141] libmachine: (addons-660533) DBG | domain addons-660533 has defined MAC address 52:54:00:1e:96:60 in network mk-addons-660533
	I0407 12:13:44.913982 1170330 main.go:141] libmachine: (addons-660533) DBG | unable to find current IP address of domain addons-660533 in network mk-addons-660533
	I0407 12:13:44.914013 1170330 main.go:141] libmachine: (addons-660533) DBG | I0407 12:13:44.913934 1170353 retry.go:31] will retry after 292.201006ms: waiting for domain to come up
	I0407 12:13:45.207635 1170330 main.go:141] libmachine: (addons-660533) DBG | domain addons-660533 has defined MAC address 52:54:00:1e:96:60 in network mk-addons-660533
	I0407 12:13:45.208213 1170330 main.go:141] libmachine: (addons-660533) DBG | unable to find current IP address of domain addons-660533 in network mk-addons-660533
	I0407 12:13:45.208250 1170330 main.go:141] libmachine: (addons-660533) DBG | I0407 12:13:45.208168 1170353 retry.go:31] will retry after 349.806046ms: waiting for domain to come up
	I0407 12:13:45.559911 1170330 main.go:141] libmachine: (addons-660533) DBG | domain addons-660533 has defined MAC address 52:54:00:1e:96:60 in network mk-addons-660533
	I0407 12:13:45.560677 1170330 main.go:141] libmachine: (addons-660533) DBG | unable to find current IP address of domain addons-660533 in network mk-addons-660533
	I0407 12:13:45.560706 1170330 main.go:141] libmachine: (addons-660533) DBG | I0407 12:13:45.560633 1170353 retry.go:31] will retry after 447.04317ms: waiting for domain to come up
	I0407 12:13:46.009823 1170330 main.go:141] libmachine: (addons-660533) DBG | domain addons-660533 has defined MAC address 52:54:00:1e:96:60 in network mk-addons-660533
	I0407 12:13:46.010469 1170330 main.go:141] libmachine: (addons-660533) DBG | unable to find current IP address of domain addons-660533 in network mk-addons-660533
	I0407 12:13:46.010509 1170330 main.go:141] libmachine: (addons-660533) DBG | I0407 12:13:46.010407 1170353 retry.go:31] will retry after 696.050087ms: waiting for domain to come up
	I0407 12:13:46.709082 1170330 main.go:141] libmachine: (addons-660533) DBG | domain addons-660533 has defined MAC address 52:54:00:1e:96:60 in network mk-addons-660533
	I0407 12:13:46.709990 1170330 main.go:141] libmachine: (addons-660533) DBG | unable to find current IP address of domain addons-660533 in network mk-addons-660533
	I0407 12:13:46.710044 1170330 main.go:141] libmachine: (addons-660533) DBG | I0407 12:13:46.709893 1170353 retry.go:31] will retry after 758.476006ms: waiting for domain to come up
	I0407 12:13:47.470023 1170330 main.go:141] libmachine: (addons-660533) DBG | domain addons-660533 has defined MAC address 52:54:00:1e:96:60 in network mk-addons-660533
	I0407 12:13:47.470576 1170330 main.go:141] libmachine: (addons-660533) DBG | unable to find current IP address of domain addons-660533 in network mk-addons-660533
	I0407 12:13:47.470608 1170330 main.go:141] libmachine: (addons-660533) DBG | I0407 12:13:47.470535 1170353 retry.go:31] will retry after 768.26527ms: waiting for domain to come up
	I0407 12:13:48.240697 1170330 main.go:141] libmachine: (addons-660533) DBG | domain addons-660533 has defined MAC address 52:54:00:1e:96:60 in network mk-addons-660533
	I0407 12:13:48.241292 1170330 main.go:141] libmachine: (addons-660533) DBG | unable to find current IP address of domain addons-660533 in network mk-addons-660533
	I0407 12:13:48.241317 1170330 main.go:141] libmachine: (addons-660533) DBG | I0407 12:13:48.241269 1170353 retry.go:31] will retry after 1.449307064s: waiting for domain to come up
	I0407 12:13:49.693362 1170330 main.go:141] libmachine: (addons-660533) DBG | domain addons-660533 has defined MAC address 52:54:00:1e:96:60 in network mk-addons-660533
	I0407 12:13:49.694144 1170330 main.go:141] libmachine: (addons-660533) DBG | unable to find current IP address of domain addons-660533 in network mk-addons-660533
	I0407 12:13:49.694189 1170330 main.go:141] libmachine: (addons-660533) DBG | I0407 12:13:49.694061 1170353 retry.go:31] will retry after 1.260442195s: waiting for domain to come up
	I0407 12:13:50.956942 1170330 main.go:141] libmachine: (addons-660533) DBG | domain addons-660533 has defined MAC address 52:54:00:1e:96:60 in network mk-addons-660533
	I0407 12:13:50.957492 1170330 main.go:141] libmachine: (addons-660533) DBG | unable to find current IP address of domain addons-660533 in network mk-addons-660533
	I0407 12:13:50.957518 1170330 main.go:141] libmachine: (addons-660533) DBG | I0407 12:13:50.957464 1170353 retry.go:31] will retry after 1.611256338s: waiting for domain to come up
	I0407 12:13:52.570426 1170330 main.go:141] libmachine: (addons-660533) DBG | domain addons-660533 has defined MAC address 52:54:00:1e:96:60 in network mk-addons-660533
	I0407 12:13:52.571029 1170330 main.go:141] libmachine: (addons-660533) DBG | unable to find current IP address of domain addons-660533 in network mk-addons-660533
	I0407 12:13:52.571060 1170330 main.go:141] libmachine: (addons-660533) DBG | I0407 12:13:52.570977 1170353 retry.go:31] will retry after 2.612567012s: waiting for domain to come up
	I0407 12:13:55.186832 1170330 main.go:141] libmachine: (addons-660533) DBG | domain addons-660533 has defined MAC address 52:54:00:1e:96:60 in network mk-addons-660533
	I0407 12:13:55.187737 1170330 main.go:141] libmachine: (addons-660533) DBG | unable to find current IP address of domain addons-660533 in network mk-addons-660533
	I0407 12:13:55.187786 1170330 main.go:141] libmachine: (addons-660533) DBG | I0407 12:13:55.187627 1170353 retry.go:31] will retry after 2.233209237s: waiting for domain to come up
	I0407 12:13:57.422919 1170330 main.go:141] libmachine: (addons-660533) DBG | domain addons-660533 has defined MAC address 52:54:00:1e:96:60 in network mk-addons-660533
	I0407 12:13:57.424331 1170330 main.go:141] libmachine: (addons-660533) DBG | unable to find current IP address of domain addons-660533 in network mk-addons-660533
	I0407 12:13:57.424394 1170330 main.go:141] libmachine: (addons-660533) DBG | I0407 12:13:57.424174 1170353 retry.go:31] will retry after 3.097595729s: waiting for domain to come up
	I0407 12:14:00.523292 1170330 main.go:141] libmachine: (addons-660533) DBG | domain addons-660533 has defined MAC address 52:54:00:1e:96:60 in network mk-addons-660533
	I0407 12:14:00.523669 1170330 main.go:141] libmachine: (addons-660533) DBG | unable to find current IP address of domain addons-660533 in network mk-addons-660533
	I0407 12:14:00.523731 1170330 main.go:141] libmachine: (addons-660533) DBG | I0407 12:14:00.523655 1170353 retry.go:31] will retry after 5.510105315s: waiting for domain to come up
	I0407 12:14:06.039279 1170330 main.go:141] libmachine: (addons-660533) DBG | domain addons-660533 has defined MAC address 52:54:00:1e:96:60 in network mk-addons-660533
	I0407 12:14:06.040263 1170330 main.go:141] libmachine: (addons-660533) found domain IP: 192.168.39.112
	I0407 12:14:06.040330 1170330 main.go:141] libmachine: (addons-660533) DBG | domain addons-660533 has current primary IP address 192.168.39.112 and MAC address 52:54:00:1e:96:60 in network mk-addons-660533
	I0407 12:14:06.040339 1170330 main.go:141] libmachine: (addons-660533) reserving static IP address...
	I0407 12:14:06.041153 1170330 main.go:141] libmachine: (addons-660533) DBG | unable to find host DHCP lease matching {name: "addons-660533", mac: "52:54:00:1e:96:60", ip: "192.168.39.112"} in network mk-addons-660533
	I0407 12:14:06.157061 1170330 main.go:141] libmachine: (addons-660533) reserved static IP address 192.168.39.112 for domain addons-660533
	I0407 12:14:06.157109 1170330 main.go:141] libmachine: (addons-660533) DBG | Getting to WaitForSSH function...
	I0407 12:14:06.157118 1170330 main.go:141] libmachine: (addons-660533) waiting for SSH...
	I0407 12:14:06.160662 1170330 main.go:141] libmachine: (addons-660533) DBG | domain addons-660533 has defined MAC address 52:54:00:1e:96:60 in network mk-addons-660533
	I0407 12:14:06.161310 1170330 main.go:141] libmachine: (addons-660533) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:1e:96:60", ip: ""} in network mk-addons-660533: {Iface:virbr1 ExpiryTime:2025-04-07 13:13:58 +0000 UTC Type:0 Mac:52:54:00:1e:96:60 Iaid: IPaddr:192.168.39.112 Prefix:24 Hostname:minikube Clientid:01:52:54:00:1e:96:60}
	I0407 12:14:06.161360 1170330 main.go:141] libmachine: (addons-660533) DBG | domain addons-660533 has defined IP address 192.168.39.112 and MAC address 52:54:00:1e:96:60 in network mk-addons-660533
	I0407 12:14:06.161499 1170330 main.go:141] libmachine: (addons-660533) DBG | Using SSH client type: external
	I0407 12:14:06.161527 1170330 main.go:141] libmachine: (addons-660533) DBG | Using SSH private key: /home/jenkins/minikube-integration/20602-1162386/.minikube/machines/addons-660533/id_rsa (-rw-------)
	I0407 12:14:06.161577 1170330 main.go:141] libmachine: (addons-660533) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.39.112 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/20602-1162386/.minikube/machines/addons-660533/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0407 12:14:06.161594 1170330 main.go:141] libmachine: (addons-660533) DBG | About to run SSH command:
	I0407 12:14:06.161631 1170330 main.go:141] libmachine: (addons-660533) DBG | exit 0
	I0407 12:14:06.290942 1170330 main.go:141] libmachine: (addons-660533) DBG | SSH cmd err, output: <nil>: 
	I0407 12:14:06.291401 1170330 main.go:141] libmachine: (addons-660533) KVM machine creation complete
	I0407 12:14:06.291758 1170330 main.go:141] libmachine: (addons-660533) Calling .GetConfigRaw
	I0407 12:14:06.292414 1170330 main.go:141] libmachine: (addons-660533) Calling .DriverName
	I0407 12:14:06.292760 1170330 main.go:141] libmachine: (addons-660533) Calling .DriverName
	I0407 12:14:06.293057 1170330 main.go:141] libmachine: Waiting for machine to be running, this may take a few minutes...
	I0407 12:14:06.293078 1170330 main.go:141] libmachine: (addons-660533) Calling .GetState
	I0407 12:14:06.295360 1170330 main.go:141] libmachine: Detecting operating system of created instance...
	I0407 12:14:06.295385 1170330 main.go:141] libmachine: Waiting for SSH to be available...
	I0407 12:14:06.295391 1170330 main.go:141] libmachine: Getting to WaitForSSH function...
	I0407 12:14:06.295398 1170330 main.go:141] libmachine: (addons-660533) Calling .GetSSHHostname
	I0407 12:14:06.299615 1170330 main.go:141] libmachine: (addons-660533) DBG | domain addons-660533 has defined MAC address 52:54:00:1e:96:60 in network mk-addons-660533
	I0407 12:14:06.300187 1170330 main.go:141] libmachine: (addons-660533) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:1e:96:60", ip: ""} in network mk-addons-660533: {Iface:virbr1 ExpiryTime:2025-04-07 13:13:58 +0000 UTC Type:0 Mac:52:54:00:1e:96:60 Iaid: IPaddr:192.168.39.112 Prefix:24 Hostname:addons-660533 Clientid:01:52:54:00:1e:96:60}
	I0407 12:14:06.300227 1170330 main.go:141] libmachine: (addons-660533) DBG | domain addons-660533 has defined IP address 192.168.39.112 and MAC address 52:54:00:1e:96:60 in network mk-addons-660533
	I0407 12:14:06.300425 1170330 main.go:141] libmachine: (addons-660533) Calling .GetSSHPort
	I0407 12:14:06.300714 1170330 main.go:141] libmachine: (addons-660533) Calling .GetSSHKeyPath
	I0407 12:14:06.301047 1170330 main.go:141] libmachine: (addons-660533) Calling .GetSSHKeyPath
	I0407 12:14:06.301396 1170330 main.go:141] libmachine: (addons-660533) Calling .GetSSHUsername
	I0407 12:14:06.301673 1170330 main.go:141] libmachine: Using SSH client type: native
	I0407 12:14:06.301996 1170330 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x836360] 0x839060 <nil>  [] 0s} 192.168.39.112 22 <nil> <nil>}
	I0407 12:14:06.302013 1170330 main.go:141] libmachine: About to run SSH command:
	exit 0
	I0407 12:14:06.414153 1170330 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0407 12:14:06.414185 1170330 main.go:141] libmachine: Detecting the provisioner...
	I0407 12:14:06.414197 1170330 main.go:141] libmachine: (addons-660533) Calling .GetSSHHostname
	I0407 12:14:06.419879 1170330 main.go:141] libmachine: (addons-660533) DBG | domain addons-660533 has defined MAC address 52:54:00:1e:96:60 in network mk-addons-660533
	I0407 12:14:06.420431 1170330 main.go:141] libmachine: (addons-660533) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:1e:96:60", ip: ""} in network mk-addons-660533: {Iface:virbr1 ExpiryTime:2025-04-07 13:13:58 +0000 UTC Type:0 Mac:52:54:00:1e:96:60 Iaid: IPaddr:192.168.39.112 Prefix:24 Hostname:addons-660533 Clientid:01:52:54:00:1e:96:60}
	I0407 12:14:06.420474 1170330 main.go:141] libmachine: (addons-660533) DBG | domain addons-660533 has defined IP address 192.168.39.112 and MAC address 52:54:00:1e:96:60 in network mk-addons-660533
	I0407 12:14:06.421073 1170330 main.go:141] libmachine: (addons-660533) Calling .GetSSHPort
	I0407 12:14:06.421487 1170330 main.go:141] libmachine: (addons-660533) Calling .GetSSHKeyPath
	I0407 12:14:06.421829 1170330 main.go:141] libmachine: (addons-660533) Calling .GetSSHKeyPath
	I0407 12:14:06.422265 1170330 main.go:141] libmachine: (addons-660533) Calling .GetSSHUsername
	I0407 12:14:06.422881 1170330 main.go:141] libmachine: Using SSH client type: native
	I0407 12:14:06.423649 1170330 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x836360] 0x839060 <nil>  [] 0s} 192.168.39.112 22 <nil> <nil>}
	I0407 12:14:06.423719 1170330 main.go:141] libmachine: About to run SSH command:
	cat /etc/os-release
	I0407 12:14:06.539431 1170330 main.go:141] libmachine: SSH cmd err, output: <nil>: NAME=Buildroot
	VERSION=2023.02.9-dirty
	ID=buildroot
	VERSION_ID=2023.02.9
	PRETTY_NAME="Buildroot 2023.02.9"
	
	I0407 12:14:06.539618 1170330 main.go:141] libmachine: found compatible host: buildroot
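The provisioner detection above is just a command run over the new VM's SSH tunnel: first a no-op to confirm SSH is answering, then `cat /etc/os-release` to identify the guest OS. A minimal Go sketch of that pattern with golang.org/x/crypto/ssh follows; the address, user, and key path are placeholders rather than values taken from this run.

package main

import (
	"fmt"
	"log"
	"os"

	"golang.org/x/crypto/ssh"
)

// runOverSSH opens a session on the target VM and returns the command output.
func runOverSSH(addr, user, keyPath, cmd string) (string, error) {
	key, err := os.ReadFile(keyPath)
	if err != nil {
		return "", err
	}
	signer, err := ssh.ParsePrivateKey(key)
	if err != nil {
		return "", err
	}
	cfg := &ssh.ClientConfig{
		User:            user,
		Auth:            []ssh.AuthMethod{ssh.PublicKeys(signer)},
		HostKeyCallback: ssh.InsecureIgnoreHostKey(), // acceptable for a throwaway test VM
	}
	client, err := ssh.Dial("tcp", addr, cfg)
	if err != nil {
		return "", err
	}
	defer client.Close()
	session, err := client.NewSession()
	if err != nil {
		return "", err
	}
	defer session.Close()
	out, err := session.CombinedOutput(cmd)
	return string(out), err
}

func main() {
	// Placeholder values; a real run would use the machine's generated id_rsa.
	out, err := runOverSSH("192.168.39.112:22", "docker", "/path/to/id_rsa", "cat /etc/os-release")
	if err != nil {
		log.Fatal(err)
	}
	fmt.Print(out)
}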
	I0407 12:14:06.539641 1170330 main.go:141] libmachine: Provisioning with buildroot...
	I0407 12:14:06.539659 1170330 main.go:141] libmachine: (addons-660533) Calling .GetMachineName
	I0407 12:14:06.540268 1170330 buildroot.go:166] provisioning hostname "addons-660533"
	I0407 12:14:06.540294 1170330 main.go:141] libmachine: (addons-660533) Calling .GetMachineName
	I0407 12:14:06.540766 1170330 main.go:141] libmachine: (addons-660533) Calling .GetSSHHostname
	I0407 12:14:06.546452 1170330 main.go:141] libmachine: (addons-660533) DBG | domain addons-660533 has defined MAC address 52:54:00:1e:96:60 in network mk-addons-660533
	I0407 12:14:06.547040 1170330 main.go:141] libmachine: (addons-660533) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:1e:96:60", ip: ""} in network mk-addons-660533: {Iface:virbr1 ExpiryTime:2025-04-07 13:13:58 +0000 UTC Type:0 Mac:52:54:00:1e:96:60 Iaid: IPaddr:192.168.39.112 Prefix:24 Hostname:addons-660533 Clientid:01:52:54:00:1e:96:60}
	I0407 12:14:06.547089 1170330 main.go:141] libmachine: (addons-660533) DBG | domain addons-660533 has defined IP address 192.168.39.112 and MAC address 52:54:00:1e:96:60 in network mk-addons-660533
	I0407 12:14:06.547429 1170330 main.go:141] libmachine: (addons-660533) Calling .GetSSHPort
	I0407 12:14:06.547908 1170330 main.go:141] libmachine: (addons-660533) Calling .GetSSHKeyPath
	I0407 12:14:06.548362 1170330 main.go:141] libmachine: (addons-660533) Calling .GetSSHKeyPath
	I0407 12:14:06.548775 1170330 main.go:141] libmachine: (addons-660533) Calling .GetSSHUsername
	I0407 12:14:06.549192 1170330 main.go:141] libmachine: Using SSH client type: native
	I0407 12:14:06.549569 1170330 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x836360] 0x839060 <nil>  [] 0s} 192.168.39.112 22 <nil> <nil>}
	I0407 12:14:06.549592 1170330 main.go:141] libmachine: About to run SSH command:
	sudo hostname addons-660533 && echo "addons-660533" | sudo tee /etc/hostname
	I0407 12:14:06.681984 1170330 main.go:141] libmachine: SSH cmd err, output: <nil>: addons-660533
	
	I0407 12:14:06.682023 1170330 main.go:141] libmachine: (addons-660533) Calling .GetSSHHostname
	I0407 12:14:06.687290 1170330 main.go:141] libmachine: (addons-660533) DBG | domain addons-660533 has defined MAC address 52:54:00:1e:96:60 in network mk-addons-660533
	I0407 12:14:06.688492 1170330 main.go:141] libmachine: (addons-660533) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:1e:96:60", ip: ""} in network mk-addons-660533: {Iface:virbr1 ExpiryTime:2025-04-07 13:13:58 +0000 UTC Type:0 Mac:52:54:00:1e:96:60 Iaid: IPaddr:192.168.39.112 Prefix:24 Hostname:addons-660533 Clientid:01:52:54:00:1e:96:60}
	I0407 12:14:06.688561 1170330 main.go:141] libmachine: (addons-660533) DBG | domain addons-660533 has defined IP address 192.168.39.112 and MAC address 52:54:00:1e:96:60 in network mk-addons-660533
	I0407 12:14:06.689207 1170330 main.go:141] libmachine: (addons-660533) Calling .GetSSHPort
	I0407 12:14:06.689806 1170330 main.go:141] libmachine: (addons-660533) Calling .GetSSHKeyPath
	I0407 12:14:06.690133 1170330 main.go:141] libmachine: (addons-660533) Calling .GetSSHKeyPath
	I0407 12:14:06.690392 1170330 main.go:141] libmachine: (addons-660533) Calling .GetSSHUsername
	I0407 12:14:06.690649 1170330 main.go:141] libmachine: Using SSH client type: native
	I0407 12:14:06.690917 1170330 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x836360] 0x839060 <nil>  [] 0s} 192.168.39.112 22 <nil> <nil>}
	I0407 12:14:06.690934 1170330 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\saddons-660533' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 addons-660533/g' /etc/hosts;
				else 
					echo '127.0.1.1 addons-660533' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0407 12:14:06.815265 1170330 main.go:141] libmachine: SSH cmd err, output: <nil>: 
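The script above is idempotent: it only touches /etc/hosts when no entry for the hostname exists, preferring to rewrite an existing 127.0.1.1 line before appending a new one. A rough Go equivalent of the same check-then-edit logic, pointed at a scratch copy of a hosts file for safety:

package main

import (
	"fmt"
	"os"
	"strings"
)

// ensureHostEntry mimics the provisioning script: if no line mentions the
// hostname, rewrite an existing 127.0.1.1 line or append a new one.
func ensureHostEntry(path, hostname string) error {
	data, err := os.ReadFile(path)
	if err != nil {
		return err
	}
	lines := strings.Split(string(data), "\n")
	for _, l := range lines {
		if strings.HasSuffix(strings.TrimSpace(l), hostname) {
			return nil // entry already present, nothing to do
		}
	}
	replaced := false
	for i, l := range lines {
		if strings.HasPrefix(l, "127.0.1.1") {
			lines[i] = "127.0.1.1 " + hostname
			replaced = true
			break
		}
	}
	if !replaced {
		lines = append(lines, "127.0.1.1 "+hostname)
	}
	return os.WriteFile(path, []byte(strings.Join(lines, "\n")), 0644)
}

func main() {
	if err := ensureHostEntry("/tmp/hosts-copy", "addons-660533"); err != nil {
		fmt.Println(err)
	}
}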
	I0407 12:14:06.815308 1170330 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/20602-1162386/.minikube CaCertPath:/home/jenkins/minikube-integration/20602-1162386/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/20602-1162386/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/20602-1162386/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/20602-1162386/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/20602-1162386/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/20602-1162386/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/20602-1162386/.minikube}
	I0407 12:14:06.815347 1170330 buildroot.go:174] setting up certificates
	I0407 12:14:06.815364 1170330 provision.go:84] configureAuth start
	I0407 12:14:06.815375 1170330 main.go:141] libmachine: (addons-660533) Calling .GetMachineName
	I0407 12:14:06.815827 1170330 main.go:141] libmachine: (addons-660533) Calling .GetIP
	I0407 12:14:06.821957 1170330 main.go:141] libmachine: (addons-660533) DBG | domain addons-660533 has defined MAC address 52:54:00:1e:96:60 in network mk-addons-660533
	I0407 12:14:06.822707 1170330 main.go:141] libmachine: (addons-660533) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:1e:96:60", ip: ""} in network mk-addons-660533: {Iface:virbr1 ExpiryTime:2025-04-07 13:13:58 +0000 UTC Type:0 Mac:52:54:00:1e:96:60 Iaid: IPaddr:192.168.39.112 Prefix:24 Hostname:addons-660533 Clientid:01:52:54:00:1e:96:60}
	I0407 12:14:06.822741 1170330 main.go:141] libmachine: (addons-660533) DBG | domain addons-660533 has defined IP address 192.168.39.112 and MAC address 52:54:00:1e:96:60 in network mk-addons-660533
	I0407 12:14:06.823629 1170330 main.go:141] libmachine: (addons-660533) Calling .GetSSHHostname
	I0407 12:14:06.829478 1170330 main.go:141] libmachine: (addons-660533) DBG | domain addons-660533 has defined MAC address 52:54:00:1e:96:60 in network mk-addons-660533
	I0407 12:14:06.830653 1170330 main.go:141] libmachine: (addons-660533) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:1e:96:60", ip: ""} in network mk-addons-660533: {Iface:virbr1 ExpiryTime:2025-04-07 13:13:58 +0000 UTC Type:0 Mac:52:54:00:1e:96:60 Iaid: IPaddr:192.168.39.112 Prefix:24 Hostname:addons-660533 Clientid:01:52:54:00:1e:96:60}
	I0407 12:14:06.830802 1170330 main.go:141] libmachine: (addons-660533) DBG | domain addons-660533 has defined IP address 192.168.39.112 and MAC address 52:54:00:1e:96:60 in network mk-addons-660533
	I0407 12:14:06.831149 1170330 provision.go:143] copyHostCerts
	I0407 12:14:06.831255 1170330 exec_runner.go:151] cp: /home/jenkins/minikube-integration/20602-1162386/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/20602-1162386/.minikube/cert.pem (1123 bytes)
	I0407 12:14:06.831390 1170330 exec_runner.go:151] cp: /home/jenkins/minikube-integration/20602-1162386/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/20602-1162386/.minikube/key.pem (1675 bytes)
	I0407 12:14:06.831460 1170330 exec_runner.go:151] cp: /home/jenkins/minikube-integration/20602-1162386/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/20602-1162386/.minikube/ca.pem (1078 bytes)
	I0407 12:14:06.831517 1170330 provision.go:117] generating server cert: /home/jenkins/minikube-integration/20602-1162386/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/20602-1162386/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/20602-1162386/.minikube/certs/ca-key.pem org=jenkins.addons-660533 san=[127.0.0.1 192.168.39.112 addons-660533 localhost minikube]
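The server certificate above is issued from the local minikube CA and carries the listed SANs (loopback, the VM's IP, and a handful of hostnames) so the same cert is valid however the daemon is reached. A condensed sketch of issuing such a cert with Go's crypto/x509; key sizes, lifetimes, and subject fields here are illustrative, not copied from minikube:

package main

import (
	"crypto/rand"
	"crypto/rsa"
	"crypto/x509"
	"crypto/x509/pkix"
	"fmt"
	"log"
	"math/big"
	"net"
	"time"
)

func main() {
	// Self-signed CA key pair (stands in for the ca.pem/ca-key.pem pair in the log).
	caKey, err := rsa.GenerateKey(rand.Reader, 2048)
	if err != nil {
		log.Fatal(err)
	}
	caTmpl := &x509.Certificate{
		SerialNumber:          big.NewInt(1),
		Subject:               pkix.Name{CommonName: "minikubeCA"},
		NotBefore:             time.Now(),
		NotAfter:              time.Now().Add(10 * 365 * 24 * time.Hour),
		IsCA:                  true,
		KeyUsage:              x509.KeyUsageCertSign,
		BasicConstraintsValid: true,
	}
	caDER, err := x509.CreateCertificate(rand.Reader, caTmpl, caTmpl, &caKey.PublicKey, caKey)
	if err != nil {
		log.Fatal(err)
	}
	caCert, err := x509.ParseCertificate(caDER)
	if err != nil {
		log.Fatal(err)
	}

	// Server cert carrying the SANs listed in the provisioning step.
	srvKey, err := rsa.GenerateKey(rand.Reader, 2048)
	if err != nil {
		log.Fatal(err)
	}
	srvTmpl := &x509.Certificate{
		SerialNumber: big.NewInt(2),
		Subject:      pkix.Name{Organization: []string{"jenkins.addons-660533"}},
		NotBefore:    time.Now(),
		NotAfter:     time.Now().Add(3 * 365 * 24 * time.Hour),
		KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
		ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
		IPAddresses:  []net.IP{net.ParseIP("127.0.0.1"), net.ParseIP("192.168.39.112")},
		DNSNames:     []string{"addons-660533", "localhost", "minikube"},
	}
	srvDER, err := x509.CreateCertificate(rand.Reader, srvTmpl, caCert, &srvKey.PublicKey, caKey)
	if err != nil {
		log.Fatal(err)
	}
	fmt.Printf("issued server cert, %d bytes of DER\n", len(srvDER))
}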
	I0407 12:14:07.284972 1170330 provision.go:177] copyRemoteCerts
	I0407 12:14:07.285051 1170330 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0407 12:14:07.285083 1170330 main.go:141] libmachine: (addons-660533) Calling .GetSSHHostname
	I0407 12:14:07.288948 1170330 main.go:141] libmachine: (addons-660533) DBG | domain addons-660533 has defined MAC address 52:54:00:1e:96:60 in network mk-addons-660533
	I0407 12:14:07.289608 1170330 main.go:141] libmachine: (addons-660533) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:1e:96:60", ip: ""} in network mk-addons-660533: {Iface:virbr1 ExpiryTime:2025-04-07 13:13:58 +0000 UTC Type:0 Mac:52:54:00:1e:96:60 Iaid: IPaddr:192.168.39.112 Prefix:24 Hostname:addons-660533 Clientid:01:52:54:00:1e:96:60}
	I0407 12:14:07.289638 1170330 main.go:141] libmachine: (addons-660533) DBG | domain addons-660533 has defined IP address 192.168.39.112 and MAC address 52:54:00:1e:96:60 in network mk-addons-660533
	I0407 12:14:07.289940 1170330 main.go:141] libmachine: (addons-660533) Calling .GetSSHPort
	I0407 12:14:07.290180 1170330 main.go:141] libmachine: (addons-660533) Calling .GetSSHKeyPath
	I0407 12:14:07.290359 1170330 main.go:141] libmachine: (addons-660533) Calling .GetSSHUsername
	I0407 12:14:07.290533 1170330 sshutil.go:53] new ssh client: &{IP:192.168.39.112 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20602-1162386/.minikube/machines/addons-660533/id_rsa Username:docker}
	I0407 12:14:07.372534 1170330 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20602-1162386/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0407 12:14:07.402006 1170330 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20602-1162386/.minikube/machines/server.pem --> /etc/docker/server.pem (1208 bytes)
	I0407 12:14:07.434940 1170330 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20602-1162386/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0407 12:14:07.463333 1170330 provision.go:87] duration metric: took 647.952114ms to configureAuth
	I0407 12:14:07.463372 1170330 buildroot.go:189] setting minikube options for container-runtime
	I0407 12:14:07.463608 1170330 config.go:182] Loaded profile config "addons-660533": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.32.2
	I0407 12:14:07.463717 1170330 main.go:141] libmachine: (addons-660533) Calling .GetSSHHostname
	I0407 12:14:07.468046 1170330 main.go:141] libmachine: (addons-660533) DBG | domain addons-660533 has defined MAC address 52:54:00:1e:96:60 in network mk-addons-660533
	I0407 12:14:07.468798 1170330 main.go:141] libmachine: (addons-660533) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:1e:96:60", ip: ""} in network mk-addons-660533: {Iface:virbr1 ExpiryTime:2025-04-07 13:13:58 +0000 UTC Type:0 Mac:52:54:00:1e:96:60 Iaid: IPaddr:192.168.39.112 Prefix:24 Hostname:addons-660533 Clientid:01:52:54:00:1e:96:60}
	I0407 12:14:07.468844 1170330 main.go:141] libmachine: (addons-660533) DBG | domain addons-660533 has defined IP address 192.168.39.112 and MAC address 52:54:00:1e:96:60 in network mk-addons-660533
	I0407 12:14:07.469091 1170330 main.go:141] libmachine: (addons-660533) Calling .GetSSHPort
	I0407 12:14:07.469377 1170330 main.go:141] libmachine: (addons-660533) Calling .GetSSHKeyPath
	I0407 12:14:07.469795 1170330 main.go:141] libmachine: (addons-660533) Calling .GetSSHKeyPath
	I0407 12:14:07.470136 1170330 main.go:141] libmachine: (addons-660533) Calling .GetSSHUsername
	I0407 12:14:07.470525 1170330 main.go:141] libmachine: Using SSH client type: native
	I0407 12:14:07.470926 1170330 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x836360] 0x839060 <nil>  [] 0s} 192.168.39.112 22 <nil> <nil>}
	I0407 12:14:07.470967 1170330 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0407 12:14:07.709784 1170330 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
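The step above writes a one-line environment file under /etc/sysconfig and restarts CRI-O so the extra --insecure-registry flag takes effect. Done directly on a systemd host, that amounts to a file write plus a service restart, sketched below (paths as in the log, but this is an illustration rather than the minikube code path):

package main

import (
	"log"
	"os"
	"os/exec"
)

func main() {
	content := "CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '\n"
	if err := os.MkdirAll("/etc/sysconfig", 0755); err != nil {
		log.Fatal(err)
	}
	if err := os.WriteFile("/etc/sysconfig/crio.minikube", []byte(content), 0644); err != nil {
		log.Fatal(err)
	}
	// Restart CRI-O so it picks up the new environment file.
	if out, err := exec.Command("systemctl", "restart", "crio").CombinedOutput(); err != nil {
		log.Fatalf("restart failed: %v: %s", err, out)
	}
}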
	I0407 12:14:07.709819 1170330 main.go:141] libmachine: Checking connection to Docker...
	I0407 12:14:07.709830 1170330 main.go:141] libmachine: (addons-660533) Calling .GetURL
	I0407 12:14:07.712238 1170330 main.go:141] libmachine: (addons-660533) DBG | using libvirt version 6000000
	I0407 12:14:07.715338 1170330 main.go:141] libmachine: (addons-660533) DBG | domain addons-660533 has defined MAC address 52:54:00:1e:96:60 in network mk-addons-660533
	I0407 12:14:07.715789 1170330 main.go:141] libmachine: (addons-660533) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:1e:96:60", ip: ""} in network mk-addons-660533: {Iface:virbr1 ExpiryTime:2025-04-07 13:13:58 +0000 UTC Type:0 Mac:52:54:00:1e:96:60 Iaid: IPaddr:192.168.39.112 Prefix:24 Hostname:addons-660533 Clientid:01:52:54:00:1e:96:60}
	I0407 12:14:07.715822 1170330 main.go:141] libmachine: (addons-660533) DBG | domain addons-660533 has defined IP address 192.168.39.112 and MAC address 52:54:00:1e:96:60 in network mk-addons-660533
	I0407 12:14:07.716103 1170330 main.go:141] libmachine: Docker is up and running!
	I0407 12:14:07.716120 1170330 main.go:141] libmachine: Reticulating splines...
	I0407 12:14:07.716130 1170330 client.go:171] duration metric: took 25.945909406s to LocalClient.Create
	I0407 12:14:07.716166 1170330 start.go:167] duration metric: took 25.946062209s to libmachine.API.Create "addons-660533"
	I0407 12:14:07.716187 1170330 start.go:293] postStartSetup for "addons-660533" (driver="kvm2")
	I0407 12:14:07.716207 1170330 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0407 12:14:07.716234 1170330 main.go:141] libmachine: (addons-660533) Calling .DriverName
	I0407 12:14:07.716512 1170330 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0407 12:14:07.716543 1170330 main.go:141] libmachine: (addons-660533) Calling .GetSSHHostname
	I0407 12:14:07.719630 1170330 main.go:141] libmachine: (addons-660533) DBG | domain addons-660533 has defined MAC address 52:54:00:1e:96:60 in network mk-addons-660533
	I0407 12:14:07.720192 1170330 main.go:141] libmachine: (addons-660533) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:1e:96:60", ip: ""} in network mk-addons-660533: {Iface:virbr1 ExpiryTime:2025-04-07 13:13:58 +0000 UTC Type:0 Mac:52:54:00:1e:96:60 Iaid: IPaddr:192.168.39.112 Prefix:24 Hostname:addons-660533 Clientid:01:52:54:00:1e:96:60}
	I0407 12:14:07.720250 1170330 main.go:141] libmachine: (addons-660533) DBG | domain addons-660533 has defined IP address 192.168.39.112 and MAC address 52:54:00:1e:96:60 in network mk-addons-660533
	I0407 12:14:07.720683 1170330 main.go:141] libmachine: (addons-660533) Calling .GetSSHPort
	I0407 12:14:07.721204 1170330 main.go:141] libmachine: (addons-660533) Calling .GetSSHKeyPath
	I0407 12:14:07.721499 1170330 main.go:141] libmachine: (addons-660533) Calling .GetSSHUsername
	I0407 12:14:07.721763 1170330 sshutil.go:53] new ssh client: &{IP:192.168.39.112 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20602-1162386/.minikube/machines/addons-660533/id_rsa Username:docker}
	I0407 12:14:07.809167 1170330 ssh_runner.go:195] Run: cat /etc/os-release
	I0407 12:14:07.814024 1170330 info.go:137] Remote host: Buildroot 2023.02.9
	I0407 12:14:07.814058 1170330 filesync.go:126] Scanning /home/jenkins/minikube-integration/20602-1162386/.minikube/addons for local assets ...
	I0407 12:14:07.814145 1170330 filesync.go:126] Scanning /home/jenkins/minikube-integration/20602-1162386/.minikube/files for local assets ...
	I0407 12:14:07.814168 1170330 start.go:296] duration metric: took 97.970546ms for postStartSetup
	I0407 12:14:07.814214 1170330 main.go:141] libmachine: (addons-660533) Calling .GetConfigRaw
	I0407 12:14:07.814852 1170330 main.go:141] libmachine: (addons-660533) Calling .GetIP
	I0407 12:14:07.818132 1170330 main.go:141] libmachine: (addons-660533) DBG | domain addons-660533 has defined MAC address 52:54:00:1e:96:60 in network mk-addons-660533
	I0407 12:14:07.818805 1170330 main.go:141] libmachine: (addons-660533) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:1e:96:60", ip: ""} in network mk-addons-660533: {Iface:virbr1 ExpiryTime:2025-04-07 13:13:58 +0000 UTC Type:0 Mac:52:54:00:1e:96:60 Iaid: IPaddr:192.168.39.112 Prefix:24 Hostname:addons-660533 Clientid:01:52:54:00:1e:96:60}
	I0407 12:14:07.818865 1170330 main.go:141] libmachine: (addons-660533) DBG | domain addons-660533 has defined IP address 192.168.39.112 and MAC address 52:54:00:1e:96:60 in network mk-addons-660533
	I0407 12:14:07.819277 1170330 profile.go:143] Saving config to /home/jenkins/minikube-integration/20602-1162386/.minikube/profiles/addons-660533/config.json ...
	I0407 12:14:07.819515 1170330 start.go:128] duration metric: took 26.071592609s to createHost
	I0407 12:14:07.819549 1170330 main.go:141] libmachine: (addons-660533) Calling .GetSSHHostname
	I0407 12:14:07.824965 1170330 main.go:141] libmachine: (addons-660533) DBG | domain addons-660533 has defined MAC address 52:54:00:1e:96:60 in network mk-addons-660533
	I0407 12:14:07.825645 1170330 main.go:141] libmachine: (addons-660533) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:1e:96:60", ip: ""} in network mk-addons-660533: {Iface:virbr1 ExpiryTime:2025-04-07 13:13:58 +0000 UTC Type:0 Mac:52:54:00:1e:96:60 Iaid: IPaddr:192.168.39.112 Prefix:24 Hostname:addons-660533 Clientid:01:52:54:00:1e:96:60}
	I0407 12:14:07.825694 1170330 main.go:141] libmachine: (addons-660533) DBG | domain addons-660533 has defined IP address 192.168.39.112 and MAC address 52:54:00:1e:96:60 in network mk-addons-660533
	I0407 12:14:07.826104 1170330 main.go:141] libmachine: (addons-660533) Calling .GetSSHPort
	I0407 12:14:07.826449 1170330 main.go:141] libmachine: (addons-660533) Calling .GetSSHKeyPath
	I0407 12:14:07.826827 1170330 main.go:141] libmachine: (addons-660533) Calling .GetSSHKeyPath
	I0407 12:14:07.827192 1170330 main.go:141] libmachine: (addons-660533) Calling .GetSSHUsername
	I0407 12:14:07.827486 1170330 main.go:141] libmachine: Using SSH client type: native
	I0407 12:14:07.827737 1170330 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x836360] 0x839060 <nil>  [] 0s} 192.168.39.112 22 <nil> <nil>}
	I0407 12:14:07.827751 1170330 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0407 12:14:07.935674 1170330 main.go:141] libmachine: SSH cmd err, output: <nil>: 1744028047.908234516
	
	I0407 12:14:07.935707 1170330 fix.go:216] guest clock: 1744028047.908234516
	I0407 12:14:07.935715 1170330 fix.go:229] Guest: 2025-04-07 12:14:07.908234516 +0000 UTC Remote: 2025-04-07 12:14:07.819529379 +0000 UTC m=+26.203600248 (delta=88.705137ms)
	I0407 12:14:07.935740 1170330 fix.go:200] guest clock delta is within tolerance: 88.705137ms
	I0407 12:14:07.935747 1170330 start.go:83] releasing machines lock for "addons-660533", held for 26.187944054s
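The guest-clock fix works by reading `date +%s.%N` inside the VM, comparing it against the host clock captured at roughly the same moment, and skipping any adjustment when the delta is inside a tolerance (the ~89ms delta above was well within it). The comparison itself is a one-liner; a sketch with an assumed one-second tolerance:

package main

import (
	"fmt"
	"time"
)

// withinTolerance reports whether the guest/host clock skew is small enough to skip adjustment.
func withinTolerance(guest, host time.Time, tolerance time.Duration) bool {
	delta := guest.Sub(host)
	if delta < 0 {
		delta = -delta
	}
	return delta <= tolerance
}

func main() {
	guest := time.Unix(1744028047, 908234516) // parsed from `date +%s.%N` on the guest
	host := time.Now()
	fmt.Println("skip adjustment:", withinTolerance(guest, host, time.Second))
}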
	I0407 12:14:07.935791 1170330 main.go:141] libmachine: (addons-660533) Calling .DriverName
	I0407 12:14:07.936144 1170330 main.go:141] libmachine: (addons-660533) Calling .GetIP
	I0407 12:14:07.939950 1170330 main.go:141] libmachine: (addons-660533) DBG | domain addons-660533 has defined MAC address 52:54:00:1e:96:60 in network mk-addons-660533
	I0407 12:14:07.940564 1170330 main.go:141] libmachine: (addons-660533) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:1e:96:60", ip: ""} in network mk-addons-660533: {Iface:virbr1 ExpiryTime:2025-04-07 13:13:58 +0000 UTC Type:0 Mac:52:54:00:1e:96:60 Iaid: IPaddr:192.168.39.112 Prefix:24 Hostname:addons-660533 Clientid:01:52:54:00:1e:96:60}
	I0407 12:14:07.940615 1170330 main.go:141] libmachine: (addons-660533) DBG | domain addons-660533 has defined IP address 192.168.39.112 and MAC address 52:54:00:1e:96:60 in network mk-addons-660533
	I0407 12:14:07.940893 1170330 main.go:141] libmachine: (addons-660533) Calling .DriverName
	I0407 12:14:07.941735 1170330 main.go:141] libmachine: (addons-660533) Calling .DriverName
	I0407 12:14:07.942082 1170330 main.go:141] libmachine: (addons-660533) Calling .DriverName
	I0407 12:14:07.942352 1170330 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0407 12:14:07.942434 1170330 main.go:141] libmachine: (addons-660533) Calling .GetSSHHostname
	I0407 12:14:07.942529 1170330 ssh_runner.go:195] Run: cat /version.json
	I0407 12:14:07.942557 1170330 main.go:141] libmachine: (addons-660533) Calling .GetSSHHostname
	I0407 12:14:07.948565 1170330 main.go:141] libmachine: (addons-660533) DBG | domain addons-660533 has defined MAC address 52:54:00:1e:96:60 in network mk-addons-660533
	I0407 12:14:07.948604 1170330 main.go:141] libmachine: (addons-660533) DBG | domain addons-660533 has defined MAC address 52:54:00:1e:96:60 in network mk-addons-660533
	I0407 12:14:07.949415 1170330 main.go:141] libmachine: (addons-660533) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:1e:96:60", ip: ""} in network mk-addons-660533: {Iface:virbr1 ExpiryTime:2025-04-07 13:13:58 +0000 UTC Type:0 Mac:52:54:00:1e:96:60 Iaid: IPaddr:192.168.39.112 Prefix:24 Hostname:addons-660533 Clientid:01:52:54:00:1e:96:60}
	I0407 12:14:07.949448 1170330 main.go:141] libmachine: (addons-660533) DBG | domain addons-660533 has defined IP address 192.168.39.112 and MAC address 52:54:00:1e:96:60 in network mk-addons-660533
	I0407 12:14:07.949472 1170330 main.go:141] libmachine: (addons-660533) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:1e:96:60", ip: ""} in network mk-addons-660533: {Iface:virbr1 ExpiryTime:2025-04-07 13:13:58 +0000 UTC Type:0 Mac:52:54:00:1e:96:60 Iaid: IPaddr:192.168.39.112 Prefix:24 Hostname:addons-660533 Clientid:01:52:54:00:1e:96:60}
	I0407 12:14:07.949491 1170330 main.go:141] libmachine: (addons-660533) DBG | domain addons-660533 has defined IP address 192.168.39.112 and MAC address 52:54:00:1e:96:60 in network mk-addons-660533
	I0407 12:14:07.949799 1170330 main.go:141] libmachine: (addons-660533) Calling .GetSSHPort
	I0407 12:14:07.950058 1170330 main.go:141] libmachine: (addons-660533) Calling .GetSSHPort
	I0407 12:14:07.950216 1170330 main.go:141] libmachine: (addons-660533) Calling .GetSSHKeyPath
	I0407 12:14:07.950391 1170330 main.go:141] libmachine: (addons-660533) Calling .GetSSHUsername
	I0407 12:14:07.950490 1170330 main.go:141] libmachine: (addons-660533) Calling .GetSSHKeyPath
	I0407 12:14:07.950609 1170330 sshutil.go:53] new ssh client: &{IP:192.168.39.112 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20602-1162386/.minikube/machines/addons-660533/id_rsa Username:docker}
	I0407 12:14:07.950663 1170330 main.go:141] libmachine: (addons-660533) Calling .GetSSHUsername
	I0407 12:14:07.950826 1170330 sshutil.go:53] new ssh client: &{IP:192.168.39.112 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20602-1162386/.minikube/machines/addons-660533/id_rsa Username:docker}
	I0407 12:14:08.049455 1170330 ssh_runner.go:195] Run: systemctl --version
	I0407 12:14:08.056308 1170330 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0407 12:14:08.223758 1170330 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0407 12:14:08.230920 1170330 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0407 12:14:08.231010 1170330 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0407 12:14:08.247983 1170330 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0407 12:14:08.248033 1170330 start.go:495] detecting cgroup driver to use...
	I0407 12:14:08.248103 1170330 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0407 12:14:08.268944 1170330 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0407 12:14:08.286530 1170330 docker.go:217] disabling cri-docker service (if available) ...
	I0407 12:14:08.286617 1170330 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0407 12:14:08.303828 1170330 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0407 12:14:08.321282 1170330 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0407 12:14:08.459471 1170330 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0407 12:14:08.615786 1170330 docker.go:233] disabling docker service ...
	I0407 12:14:08.615903 1170330 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0407 12:14:08.638191 1170330 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0407 12:14:08.659360 1170330 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0407 12:14:08.817253 1170330 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0407 12:14:08.953591 1170330 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0407 12:14:08.969381 1170330 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0407 12:14:08.992076 1170330 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10" pause image...
	I0407 12:14:08.992153 1170330 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0407 12:14:09.004477 1170330 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0407 12:14:09.004562 1170330 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0407 12:14:09.016180 1170330 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0407 12:14:09.028704 1170330 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0407 12:14:09.041525 1170330 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0407 12:14:09.054352 1170330 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0407 12:14:09.065923 1170330 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I0407 12:14:09.087490 1170330 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
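The run of sed commands above rewrites /etc/crio/crio.conf.d/02-crio.conf in place: pinning the pause image, forcing the cgroupfs cgroup manager, dropping and re-adding the conmon_cgroup line, and seeding default_sysctls so unprivileged ports start at 0. The same line-oriented rewrites can be expressed with Go regexps; the sketch below edits a throwaway copy of the file and covers only two of the substitutions:

package main

import (
	"log"
	"os"
	"regexp"
)

func main() {
	const path = "/tmp/02-crio.conf" // edit a copy, not the live config
	data, err := os.ReadFile(path)
	if err != nil {
		log.Fatal(err)
	}
	// Equivalent of: sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10"|'
	data = regexp.MustCompile(`(?m)^.*pause_image = .*$`).
		ReplaceAll(data, []byte(`pause_image = "registry.k8s.io/pause:3.10"`))
	// Equivalent of: sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|'
	data = regexp.MustCompile(`(?m)^.*cgroup_manager = .*$`).
		ReplaceAll(data, []byte(`cgroup_manager = "cgroupfs"`))
	if err := os.WriteFile(path, data, 0644); err != nil {
		log.Fatal(err)
	}
}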
	I0407 12:14:09.099714 1170330 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0407 12:14:09.110433 1170330 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0407 12:14:09.110510 1170330 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0407 12:14:09.124020 1170330 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0407 12:14:09.135432 1170330 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0407 12:14:09.260611 1170330 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0407 12:14:09.361332 1170330 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I0407 12:14:09.361461 1170330 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
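After restarting CRI-O, the start-up code gives the runtime a fixed budget (60s here) to expose its socket, polling with stat until it appears. A generic polling helper of that shape:

package main

import (
	"fmt"
	"os"
	"time"
)

// waitForPath polls for a filesystem path until it appears or the timeout elapses.
func waitForPath(path string, timeout, interval time.Duration) error {
	deadline := time.Now().Add(timeout)
	for {
		if _, err := os.Stat(path); err == nil {
			return nil
		}
		if time.Now().After(deadline) {
			return fmt.Errorf("timed out waiting for %s", path)
		}
		time.Sleep(interval)
	}
}

func main() {
	if err := waitForPath("/var/run/crio/crio.sock", 60*time.Second, 500*time.Millisecond); err != nil {
		fmt.Println(err)
	}
}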
	I0407 12:14:09.367293 1170330 start.go:563] Will wait 60s for crictl version
	I0407 12:14:09.367429 1170330 ssh_runner.go:195] Run: which crictl
	I0407 12:14:09.372677 1170330 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0407 12:14:09.418788 1170330 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0407 12:14:09.419098 1170330 ssh_runner.go:195] Run: crio --version
	I0407 12:14:09.452531 1170330 ssh_runner.go:195] Run: crio --version
	I0407 12:14:09.491682 1170330 out.go:177] * Preparing Kubernetes v1.32.2 on CRI-O 1.29.1 ...
	I0407 12:14:09.494081 1170330 main.go:141] libmachine: (addons-660533) Calling .GetIP
	I0407 12:14:09.499476 1170330 main.go:141] libmachine: (addons-660533) DBG | domain addons-660533 has defined MAC address 52:54:00:1e:96:60 in network mk-addons-660533
	I0407 12:14:09.500278 1170330 main.go:141] libmachine: (addons-660533) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:1e:96:60", ip: ""} in network mk-addons-660533: {Iface:virbr1 ExpiryTime:2025-04-07 13:13:58 +0000 UTC Type:0 Mac:52:54:00:1e:96:60 Iaid: IPaddr:192.168.39.112 Prefix:24 Hostname:addons-660533 Clientid:01:52:54:00:1e:96:60}
	I0407 12:14:09.500312 1170330 main.go:141] libmachine: (addons-660533) DBG | domain addons-660533 has defined IP address 192.168.39.112 and MAC address 52:54:00:1e:96:60 in network mk-addons-660533
	I0407 12:14:09.500665 1170330 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I0407 12:14:09.505798 1170330 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.39.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0407 12:14:09.520697 1170330 kubeadm.go:883] updating cluster {Name:addons-660533 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.35.0-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.46-1743675393-20591@sha256:8de8167e280414c9d16e4c3da59bc85bc7c9cc24228af995863fc7bcabfcf727 Memory:4000 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.32.2 ClusterName:addons-660533 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.112 Port:8443 KubernetesVersion:v1.32.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...

	I0407 12:14:09.520835 1170330 preload.go:131] Checking if preload exists for k8s version v1.32.2 and runtime crio
	I0407 12:14:09.520896 1170330 ssh_runner.go:195] Run: sudo crictl images --output json
	I0407 12:14:09.565902 1170330 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.32.2". assuming images are not preloaded.
	I0407 12:14:09.566061 1170330 ssh_runner.go:195] Run: which lz4
	I0407 12:14:09.572318 1170330 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I0407 12:14:09.578430 1170330 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0407 12:14:09.578499 1170330 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20602-1162386/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.32.2-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (399124012 bytes)
	I0407 12:14:11.079171 1170330 crio.go:462] duration metric: took 1.506987762s to copy over tarball
	I0407 12:14:11.079263 1170330 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I0407 12:14:13.871798 1170330 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (2.792494396s)
	I0407 12:14:13.871834 1170330 crio.go:469] duration metric: took 2.792629043s to extract the tarball
	I0407 12:14:13.871843 1170330 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I0407 12:14:13.912834 1170330 ssh_runner.go:195] Run: sudo crictl images --output json
	I0407 12:14:13.960604 1170330 crio.go:514] all images are preloaded for cri-o runtime.
	I0407 12:14:13.960637 1170330 cache_images.go:84] Images are preloaded, skipping loading
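Both image checks above shell out to `sudo crictl images --output json` and look for expected references such as registry.k8s.io/kube-apiserver:v1.32.2; before the preload tarball is extracted the image is missing, afterwards everything is present. A sketch of that check is below; the JSON field names (an "images" array whose entries carry "repoTags") are assumptions about crictl's output and should be verified against the crictl version in use:

package main

import (
	"encoding/json"
	"fmt"
	"log"
	"os/exec"
	"strings"
)

// crictlImages mirrors the parts of `crictl images --output json` we care about
// (field names assumed; check them against your crictl version).
type crictlImages struct {
	Images []struct {
		RepoTags []string `json:"repoTags"`
	} `json:"images"`
}

func hasImage(want string) (bool, error) {
	out, err := exec.Command("sudo", "crictl", "images", "--output", "json").Output()
	if err != nil {
		return false, err
	}
	var parsed crictlImages
	if err := json.Unmarshal(out, &parsed); err != nil {
		return false, err
	}
	for _, img := range parsed.Images {
		for _, tag := range img.RepoTags {
			if strings.Contains(tag, want) {
				return true, nil
			}
		}
	}
	return false, nil
}

func main() {
	ok, err := hasImage("registry.k8s.io/kube-apiserver:v1.32.2")
	if err != nil {
		log.Fatal(err)
	}
	fmt.Println("preloaded:", ok)
}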
	I0407 12:14:13.960650 1170330 kubeadm.go:934] updating node { 192.168.39.112 8443 v1.32.2 crio true true} ...
	I0407 12:14:13.960796 1170330 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.32.2/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=addons-660533 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.39.112
	
	[Install]
	 config:
	{KubernetesVersion:v1.32.2 ClusterName:addons-660533 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0407 12:14:13.960894 1170330 ssh_runner.go:195] Run: crio config
	I0407 12:14:14.018241 1170330 cni.go:84] Creating CNI manager for ""
	I0407 12:14:14.018276 1170330 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0407 12:14:14.018314 1170330 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0407 12:14:14.018339 1170330 kubeadm.go:189] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.39.112 APIServerPort:8443 KubernetesVersion:v1.32.2 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:addons-660533 NodeName:addons-660533 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.39.112"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.39.112 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0407 12:14:14.018499 1170330 kubeadm.go:195] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.39.112
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "addons-660533"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.39.112"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.39.112"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      - name: "proxy-refresh-interval"
	        value: "70000"
	kubernetesVersion: v1.32.2
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
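The kubeadm, kubelet, and kube-proxy configuration above is rendered from the cluster settings (node name and IP, CRI socket, pod and service CIDRs) and later copied to /var/tmp/minikube/kubeadm.yaml.new. A stripped-down illustration of rendering such a document with text/template follows; only a fragment of the InitConfiguration is templated, and the parameter struct is invented for the example:

package main

import (
	"os"
	"text/template"
)

// Only a fragment of the full config, to show the templating pattern.
const kubeadmTmpl = `apiVersion: kubeadm.k8s.io/v1beta4
kind: InitConfiguration
localAPIEndpoint:
  advertiseAddress: {{.NodeIP}}
  bindPort: {{.APIServerPort}}
nodeRegistration:
  criSocket: {{.CRISocket}}
  name: "{{.NodeName}}"
`

type clusterParams struct {
	NodeIP        string
	APIServerPort int
	CRISocket     string
	NodeName      string
}

func main() {
	p := clusterParams{
		NodeIP:        "192.168.39.112",
		APIServerPort: 8443,
		CRISocket:     "unix:///var/run/crio/crio.sock",
		NodeName:      "addons-660533",
	}
	t := template.Must(template.New("kubeadm").Parse(kubeadmTmpl))
	if err := t.Execute(os.Stdout, p); err != nil {
		panic(err)
	}
}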
	I0407 12:14:14.018580 1170330 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.32.2
	I0407 12:14:14.032047 1170330 binaries.go:44] Found k8s binaries, skipping transfer
	I0407 12:14:14.032144 1170330 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0407 12:14:14.043829 1170330 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (313 bytes)
	I0407 12:14:14.067032 1170330 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0407 12:14:14.090857 1170330 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2293 bytes)
	I0407 12:14:14.111108 1170330 ssh_runner.go:195] Run: grep 192.168.39.112	control-plane.minikube.internal$ /etc/hosts
	I0407 12:14:14.116304 1170330 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.39.112	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0407 12:14:14.131895 1170330 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0407 12:14:14.286452 1170330 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0407 12:14:14.312383 1170330 certs.go:68] Setting up /home/jenkins/minikube-integration/20602-1162386/.minikube/profiles/addons-660533 for IP: 192.168.39.112
	I0407 12:14:14.312418 1170330 certs.go:194] generating shared ca certs ...
	I0407 12:14:14.312444 1170330 certs.go:226] acquiring lock for ca certs: {Name:mk8e89191fca7f2111bdd08c345368f593b0d5d0 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0407 12:14:14.312659 1170330 certs.go:240] generating "minikubeCA" ca cert: /home/jenkins/minikube-integration/20602-1162386/.minikube/ca.key
	I0407 12:14:14.645572 1170330 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/20602-1162386/.minikube/ca.crt ...
	I0407 12:14:14.645629 1170330 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20602-1162386/.minikube/ca.crt: {Name:mkc95c940f069eb5f3e07d03ea267887c78b5ea7 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0407 12:14:14.645867 1170330 crypto.go:164] Writing key to /home/jenkins/minikube-integration/20602-1162386/.minikube/ca.key ...
	I0407 12:14:14.645885 1170330 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20602-1162386/.minikube/ca.key: {Name:mk5cc8aa93915df6d7ca5f29bf3e048d9507b2f3 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0407 12:14:14.646002 1170330 certs.go:240] generating "proxyClientCA" ca cert: /home/jenkins/minikube-integration/20602-1162386/.minikube/proxy-client-ca.key
	I0407 12:14:14.859315 1170330 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/20602-1162386/.minikube/proxy-client-ca.crt ...
	I0407 12:14:14.859356 1170330 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20602-1162386/.minikube/proxy-client-ca.crt: {Name:mk980613e0ee3e308da857326f148a61c03c1188 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0407 12:14:14.859579 1170330 crypto.go:164] Writing key to /home/jenkins/minikube-integration/20602-1162386/.minikube/proxy-client-ca.key ...
	I0407 12:14:14.859601 1170330 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20602-1162386/.minikube/proxy-client-ca.key: {Name:mk89da2fc3455f9cf2a52d8c4b3db8ab009dcb80 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0407 12:14:14.859711 1170330 certs.go:256] generating profile certs ...
	I0407 12:14:14.859776 1170330 certs.go:363] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/20602-1162386/.minikube/profiles/addons-660533/client.key
	I0407 12:14:14.859794 1170330 crypto.go:68] Generating cert /home/jenkins/minikube-integration/20602-1162386/.minikube/profiles/addons-660533/client.crt with IP's: []
	I0407 12:14:14.943633 1170330 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/20602-1162386/.minikube/profiles/addons-660533/client.crt ...
	I0407 12:14:14.943676 1170330 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20602-1162386/.minikube/profiles/addons-660533/client.crt: {Name:mk82c5b21f5537c9b8fbe72e948228e69159d503 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0407 12:14:14.943895 1170330 crypto.go:164] Writing key to /home/jenkins/minikube-integration/20602-1162386/.minikube/profiles/addons-660533/client.key ...
	I0407 12:14:14.943911 1170330 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20602-1162386/.minikube/profiles/addons-660533/client.key: {Name:mkb8b94c9a4ecd69a4bdb99fdc61e1348161738d Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0407 12:14:14.944021 1170330 certs.go:363] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/20602-1162386/.minikube/profiles/addons-660533/apiserver.key.02b38bf2
	I0407 12:14:14.944055 1170330 crypto.go:68] Generating cert /home/jenkins/minikube-integration/20602-1162386/.minikube/profiles/addons-660533/apiserver.crt.02b38bf2 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.39.112]
	I0407 12:14:15.046929 1170330 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/20602-1162386/.minikube/profiles/addons-660533/apiserver.crt.02b38bf2 ...
	I0407 12:14:15.046978 1170330 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20602-1162386/.minikube/profiles/addons-660533/apiserver.crt.02b38bf2: {Name:mke19a6de7063793311afa2b48a206970efaab5c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0407 12:14:15.047325 1170330 crypto.go:164] Writing key to /home/jenkins/minikube-integration/20602-1162386/.minikube/profiles/addons-660533/apiserver.key.02b38bf2 ...
	I0407 12:14:15.047355 1170330 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20602-1162386/.minikube/profiles/addons-660533/apiserver.key.02b38bf2: {Name:mk129b62bf02de30e8d0ca5bab3550a5dad204a9 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0407 12:14:15.047486 1170330 certs.go:381] copying /home/jenkins/minikube-integration/20602-1162386/.minikube/profiles/addons-660533/apiserver.crt.02b38bf2 -> /home/jenkins/minikube-integration/20602-1162386/.minikube/profiles/addons-660533/apiserver.crt
	I0407 12:14:15.047580 1170330 certs.go:385] copying /home/jenkins/minikube-integration/20602-1162386/.minikube/profiles/addons-660533/apiserver.key.02b38bf2 -> /home/jenkins/minikube-integration/20602-1162386/.minikube/profiles/addons-660533/apiserver.key
	I0407 12:14:15.047634 1170330 certs.go:363] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/20602-1162386/.minikube/profiles/addons-660533/proxy-client.key
	I0407 12:14:15.047658 1170330 crypto.go:68] Generating cert /home/jenkins/minikube-integration/20602-1162386/.minikube/profiles/addons-660533/proxy-client.crt with IP's: []
	I0407 12:14:15.182574 1170330 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/20602-1162386/.minikube/profiles/addons-660533/proxy-client.crt ...
	I0407 12:14:15.182615 1170330 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20602-1162386/.minikube/profiles/addons-660533/proxy-client.crt: {Name:mk7d8b65e79386e669f5177917a82df189439b87 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0407 12:14:15.182856 1170330 crypto.go:164] Writing key to /home/jenkins/minikube-integration/20602-1162386/.minikube/profiles/addons-660533/proxy-client.key ...
	I0407 12:14:15.182877 1170330 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20602-1162386/.minikube/profiles/addons-660533/proxy-client.key: {Name:mk5c2ebf6507c37caae7e648cc00a7ed4d0cd743 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0407 12:14:15.183129 1170330 certs.go:484] found cert: /home/jenkins/minikube-integration/20602-1162386/.minikube/certs/ca-key.pem (1675 bytes)
	I0407 12:14:15.183173 1170330 certs.go:484] found cert: /home/jenkins/minikube-integration/20602-1162386/.minikube/certs/ca.pem (1078 bytes)
	I0407 12:14:15.183197 1170330 certs.go:484] found cert: /home/jenkins/minikube-integration/20602-1162386/.minikube/certs/cert.pem (1123 bytes)
	I0407 12:14:15.183232 1170330 certs.go:484] found cert: /home/jenkins/minikube-integration/20602-1162386/.minikube/certs/key.pem (1675 bytes)
	I0407 12:14:15.183995 1170330 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20602-1162386/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0407 12:14:15.212773 1170330 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20602-1162386/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0407 12:14:15.239030 1170330 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20602-1162386/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0407 12:14:15.268503 1170330 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20602-1162386/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0407 12:14:15.297451 1170330 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20602-1162386/.minikube/profiles/addons-660533/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1419 bytes)
	I0407 12:14:15.327562 1170330 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20602-1162386/.minikube/profiles/addons-660533/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0407 12:14:15.358443 1170330 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20602-1162386/.minikube/profiles/addons-660533/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0407 12:14:15.389139 1170330 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20602-1162386/.minikube/profiles/addons-660533/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0407 12:14:15.420083 1170330 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20602-1162386/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0407 12:14:15.449648 1170330 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0407 12:14:15.471312 1170330 ssh_runner.go:195] Run: openssl version
	I0407 12:14:15.479277 1170330 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0407 12:14:15.493069 1170330 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0407 12:14:15.500780 1170330 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Apr  7 12:14 /usr/share/ca-certificates/minikubeCA.pem
	I0407 12:14:15.500872 1170330 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0407 12:14:15.508519 1170330 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0407 12:14:15.526513 1170330 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0407 12:14:15.531829 1170330 certs.go:399] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I0407 12:14:15.531907 1170330 kubeadm.go:392] StartCluster: {Name:addons-660533 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.35.0-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.46-1743675393-20591@sha256:8de8167e280414c9d16e4c3da59bc85bc7c9cc24228af995863fc7bcabfcf727 Memory:4000 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.32.2 ClusterName:addons-660533 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.112 Port:8443 KubernetesVersion:v1.32.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0407 12:14:15.532022 1170330 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0407 12:14:15.532094 1170330 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0407 12:14:15.582707 1170330 cri.go:89] found id: ""
	I0407 12:14:15.582786 1170330 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0407 12:14:15.594244 1170330 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0407 12:14:15.604983 1170330 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0407 12:14:15.616054 1170330 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0407 12:14:15.616082 1170330 kubeadm.go:157] found existing configuration files:
	
	I0407 12:14:15.616146 1170330 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0407 12:14:15.627515 1170330 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0407 12:14:15.627602 1170330 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0407 12:14:15.639297 1170330 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0407 12:14:15.650621 1170330 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0407 12:14:15.650696 1170330 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0407 12:14:15.662436 1170330 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0407 12:14:15.673247 1170330 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0407 12:14:15.673322 1170330 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0407 12:14:15.687775 1170330 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0407 12:14:15.699384 1170330 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0407 12:14:15.699467 1170330 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
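
	The four grep/rm pairs above are minikube's stale-config check: each expected kubeconfig under /etc/kubernetes is searched for the control-plane endpoint and removed if the endpoint is absent (or, as here, if the file does not exist at all), so the `kubeadm init` that follows starts from a clean slate. A minimal sketch of that loop, run against the local filesystem instead of over SSH (paths and endpoint copied from the log, error handling simplified):

	package main

	import (
		"bytes"
		"fmt"
		"os"
	)

	// removeStaleKubeconfigs drops any kubeconfig that does not reference the
	// expected control-plane endpoint. Sketch only: minikube performs the
	// equivalent grep/rm over SSH inside the guest.
	func removeStaleKubeconfigs(endpoint string, paths []string) {
		for _, p := range paths {
			data, err := os.ReadFile(p)
			if err != nil || !bytes.Contains(data, []byte(endpoint)) {
				// Missing file or wrong endpoint: treat as stale and remove.
				os.Remove(p)
				fmt.Printf("removed stale config %s\n", p)
			}
		}
	}

	func main() {
		removeStaleKubeconfigs("https://control-plane.minikube.internal:8443", []string{
			"/etc/kubernetes/admin.conf",
			"/etc/kubernetes/kubelet.conf",
			"/etc/kubernetes/controller-manager.conf",
			"/etc/kubernetes/scheduler.conf",
		})
	}
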
	I0407 12:14:15.711699 1170330 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.32.2:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0407 12:14:15.784008 1170330 kubeadm.go:310] [init] Using Kubernetes version: v1.32.2
	I0407 12:14:15.784067 1170330 kubeadm.go:310] [preflight] Running pre-flight checks
	I0407 12:14:15.891464 1170330 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0407 12:14:15.891618 1170330 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0407 12:14:15.891775 1170330 kubeadm.go:310] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I0407 12:14:15.906357 1170330 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0407 12:14:15.910348 1170330 out.go:235]   - Generating certificates and keys ...
	I0407 12:14:15.910478 1170330 kubeadm.go:310] [certs] Using existing ca certificate authority
	I0407 12:14:15.910618 1170330 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I0407 12:14:16.073659 1170330 kubeadm.go:310] [certs] Generating "apiserver-kubelet-client" certificate and key
	I0407 12:14:16.133192 1170330 kubeadm.go:310] [certs] Generating "front-proxy-ca" certificate and key
	I0407 12:14:16.359328 1170330 kubeadm.go:310] [certs] Generating "front-proxy-client" certificate and key
	I0407 12:14:16.578726 1170330 kubeadm.go:310] [certs] Generating "etcd/ca" certificate and key
	I0407 12:14:17.259727 1170330 kubeadm.go:310] [certs] Generating "etcd/server" certificate and key
	I0407 12:14:17.260515 1170330 kubeadm.go:310] [certs] etcd/server serving cert is signed for DNS names [addons-660533 localhost] and IPs [192.168.39.112 127.0.0.1 ::1]
	I0407 12:14:17.382010 1170330 kubeadm.go:310] [certs] Generating "etcd/peer" certificate and key
	I0407 12:14:17.382220 1170330 kubeadm.go:310] [certs] etcd/peer serving cert is signed for DNS names [addons-660533 localhost] and IPs [192.168.39.112 127.0.0.1 ::1]
	I0407 12:14:17.794029 1170330 kubeadm.go:310] [certs] Generating "etcd/healthcheck-client" certificate and key
	I0407 12:14:17.974937 1170330 kubeadm.go:310] [certs] Generating "apiserver-etcd-client" certificate and key
	I0407 12:14:18.325822 1170330 kubeadm.go:310] [certs] Generating "sa" key and public key
	I0407 12:14:18.325947 1170330 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0407 12:14:18.497613 1170330 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0407 12:14:18.635428 1170330 kubeadm.go:310] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I0407 12:14:18.819360 1170330 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0407 12:14:18.929055 1170330 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0407 12:14:19.002525 1170330 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0407 12:14:19.003130 1170330 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0407 12:14:19.005803 1170330 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0407 12:14:19.008143 1170330 out.go:235]   - Booting up control plane ...
	I0407 12:14:19.008285 1170330 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0407 12:14:19.008357 1170330 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0407 12:14:19.008418 1170330 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0407 12:14:19.025676 1170330 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0407 12:14:19.034518 1170330 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0407 12:14:19.034682 1170330 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I0407 12:14:19.200798 1170330 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I0407 12:14:19.200959 1170330 kubeadm.go:310] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I0407 12:14:20.201024 1170330 kubeadm.go:310] [kubelet-check] The kubelet is healthy after 1.001318388s
	I0407 12:14:20.201114 1170330 kubeadm.go:310] [api-check] Waiting for a healthy API server. This can take up to 4m0s
	I0407 12:14:25.201487 1170330 kubeadm.go:310] [api-check] The API server is healthy after 5.003420036s
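
	The kubelet-check and api-check lines are kubeadm polling two health endpoints until they answer: the kubelet's local healthz on 127.0.0.1:10248 and the API server's healthz, each with the 4m0s ceiling mentioned in the output (here they became healthy after roughly 1s and 5s). A minimal polling sketch, assuming a plain HTTP endpoint and ignoring the TLS and client-certificate details kubeadm actually uses:

	package main

	import (
		"fmt"
		"net/http"
		"time"
	)

	// waitHealthy polls url until it answers 200 OK or the timeout elapses.
	// Sketch only: kubeadm's real api-check talks HTTPS with client certs.
	func waitHealthy(url string, timeout time.Duration) error {
		deadline := time.Now().Add(timeout)
		for time.Now().Before(deadline) {
			resp, err := http.Get(url)
			if err == nil {
				resp.Body.Close()
				if resp.StatusCode == http.StatusOK {
					return nil
				}
			}
			time.Sleep(500 * time.Millisecond)
		}
		return fmt.Errorf("%s not healthy after %s", url, timeout)
	}

	func main() {
		if err := waitHealthy("http://127.0.0.1:10248/healthz", 4*time.Minute); err != nil {
			fmt.Println(err)
		}
	}
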
	I0407 12:14:25.220408 1170330 kubeadm.go:310] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0407 12:14:25.249257 1170330 kubeadm.go:310] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I0407 12:14:25.292289 1170330 kubeadm.go:310] [upload-certs] Skipping phase. Please see --upload-certs
	I0407 12:14:25.292511 1170330 kubeadm.go:310] [mark-control-plane] Marking the node addons-660533 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I0407 12:14:25.311147 1170330 kubeadm.go:310] [bootstrap-token] Using token: fnahs1.t8g3kv43agssn02k
	I0407 12:14:25.313360 1170330 out.go:235]   - Configuring RBAC rules ...
	I0407 12:14:25.313542 1170330 kubeadm.go:310] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0407 12:14:25.320269 1170330 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I0407 12:14:25.332447 1170330 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0407 12:14:25.341812 1170330 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I0407 12:14:25.345774 1170330 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0407 12:14:25.349484 1170330 kubeadm.go:310] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0407 12:14:25.610060 1170330 kubeadm.go:310] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I0407 12:14:26.076338 1170330 kubeadm.go:310] [addons] Applied essential addon: CoreDNS
	I0407 12:14:26.609182 1170330 kubeadm.go:310] [addons] Applied essential addon: kube-proxy
	I0407 12:14:26.610071 1170330 kubeadm.go:310] 
	I0407 12:14:26.610182 1170330 kubeadm.go:310] Your Kubernetes control-plane has initialized successfully!
	I0407 12:14:26.610191 1170330 kubeadm.go:310] 
	I0407 12:14:26.610294 1170330 kubeadm.go:310] To start using your cluster, you need to run the following as a regular user:
	I0407 12:14:26.610305 1170330 kubeadm.go:310] 
	I0407 12:14:26.610339 1170330 kubeadm.go:310]   mkdir -p $HOME/.kube
	I0407 12:14:26.610428 1170330 kubeadm.go:310]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0407 12:14:26.610511 1170330 kubeadm.go:310]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0407 12:14:26.610522 1170330 kubeadm.go:310] 
	I0407 12:14:26.610638 1170330 kubeadm.go:310] Alternatively, if you are the root user, you can run:
	I0407 12:14:26.610664 1170330 kubeadm.go:310] 
	I0407 12:14:26.610713 1170330 kubeadm.go:310]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I0407 12:14:26.610725 1170330 kubeadm.go:310] 
	I0407 12:14:26.610805 1170330 kubeadm.go:310] You should now deploy a pod network to the cluster.
	I0407 12:14:26.610963 1170330 kubeadm.go:310] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0407 12:14:26.611083 1170330 kubeadm.go:310]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0407 12:14:26.611097 1170330 kubeadm.go:310] 
	I0407 12:14:26.611210 1170330 kubeadm.go:310] You can now join any number of control-plane nodes by copying certificate authorities
	I0407 12:14:26.611349 1170330 kubeadm.go:310] and service account keys on each node and then running the following as root:
	I0407 12:14:26.611382 1170330 kubeadm.go:310] 
	I0407 12:14:26.611481 1170330 kubeadm.go:310]   kubeadm join control-plane.minikube.internal:8443 --token fnahs1.t8g3kv43agssn02k \
	I0407 12:14:26.611647 1170330 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:001387253bb6e222db2af12e9fcbe5a1c3ee2a6f53970e58b5a7d017a3fc6618 \
	I0407 12:14:26.611687 1170330 kubeadm.go:310] 	--control-plane 
	I0407 12:14:26.611707 1170330 kubeadm.go:310] 
	I0407 12:14:26.611891 1170330 kubeadm.go:310] Then you can join any number of worker nodes by running the following on each as root:
	I0407 12:14:26.611921 1170330 kubeadm.go:310] 
	I0407 12:14:26.612063 1170330 kubeadm.go:310] kubeadm join control-plane.minikube.internal:8443 --token fnahs1.t8g3kv43agssn02k \
	I0407 12:14:26.612218 1170330 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:001387253bb6e222db2af12e9fcbe5a1c3ee2a6f53970e58b5a7d017a3fc6618 
	I0407 12:14:26.613391 1170330 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0407 12:14:26.613420 1170330 cni.go:84] Creating CNI manager for ""
	I0407 12:14:26.613431 1170330 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0407 12:14:26.616267 1170330 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0407 12:14:26.618282 1170330 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0407 12:14:26.630433 1170330 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
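
	The 496-byte file copied to /etc/cni/net.d/1-k8s.conflist is minikube's bridge CNI configuration; its exact contents are not shown in the log. A generic bridge conflist of the kind CRI-O would pick up could be generated as below; the subnet, bridge name, and plugin list are illustrative assumptions, not the values minikube writes:

	package main

	import (
		"encoding/json"
		"fmt"
	)

	// Emits a generic CNI conflist for a bridge network with host-local IPAM.
	// Illustrative only: not the exact file minikube installs.
	func main() {
		conf := map[string]any{
			"cniVersion": "1.0.0",
			"name":       "bridge",
			"plugins": []map[string]any{
				{
					"type":      "bridge",
					"bridge":    "bridge",
					"isGateway": true,
					"ipMasq":    true,
					"ipam": map[string]any{
						"type":   "host-local",
						"subnet": "10.244.0.0/16",
					},
				},
				{"type": "portmap", "capabilities": map[string]bool{"portMappings": true}},
			},
		}
		out, _ := json.MarshalIndent(conf, "", "  ")
		fmt.Println(string(out))
	}
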
	I0407 12:14:26.653829 1170330 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0407 12:14:26.653909 1170330 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.32.2/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0407 12:14:26.653941 1170330 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.32.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes addons-660533 minikube.k8s.io/updated_at=2025_04_07T12_14_26_0700 minikube.k8s.io/version=v1.35.0 minikube.k8s.io/commit=33e6edc58d2014d70e908473920ef4ac8eae1e43 minikube.k8s.io/name=addons-660533 minikube.k8s.io/primary=true
	I0407 12:14:26.681154 1170330 ops.go:34] apiserver oom_adj: -16
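
	The "apiserver oom_adj: -16" line records the kube-apiserver's OOM score adjustment read via /proc just after the minikube-rbac clusterrolebinding and node labels were applied; a negative value means the kernel is less likely to kill the process under memory pressure. A small sketch of that read, run locally rather than on the guest and assuming a single kube-apiserver process:

	package main

	import (
		"fmt"
		"os"
		"os/exec"
		"strings"
	)

	// apiserverOOMAdj mirrors `cat /proc/$(pgrep kube-apiserver)/oom_adj`.
	// Sketch only: assumes pgrep returns exactly one PID.
	func apiserverOOMAdj() (string, error) {
		pid, err := exec.Command("pgrep", "kube-apiserver").Output()
		if err != nil {
			return "", err
		}
		data, err := os.ReadFile("/proc/" + strings.TrimSpace(string(pid)) + "/oom_adj")
		if err != nil {
			return "", err
		}
		return strings.TrimSpace(string(data)), nil
	}

	func main() {
		if adj, err := apiserverOOMAdj(); err == nil {
			fmt.Println("apiserver oom_adj:", adj)
		}
	}
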
	I0407 12:14:26.814499 1170330 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.32.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0407 12:14:27.315637 1170330 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.32.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0407 12:14:27.815317 1170330 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.32.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0407 12:14:28.315516 1170330 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.32.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0407 12:14:28.814698 1170330 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.32.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0407 12:14:29.314757 1170330 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.32.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0407 12:14:29.815545 1170330 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.32.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0407 12:14:30.314577 1170330 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.32.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0407 12:14:30.815268 1170330 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.32.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0407 12:14:31.314903 1170330 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.32.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0407 12:14:31.426557 1170330 kubeadm.go:1113] duration metric: took 4.772713775s to wait for elevateKubeSystemPrivileges
	I0407 12:14:31.426626 1170330 kubeadm.go:394] duration metric: took 15.89472428s to StartCluster
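
	The ten identical `kubectl get sa default` invocations above are a retry loop: minikube keeps asking for the `default` service account until the controller-manager has created it, so the freshly granted kube-system privileges are actually usable; here that wait accounted for about 4.77s of the 15.89s StartCluster total. A sketch of that retry, shelling out to kubectl locally instead of via the guest's bundled binary (kubeconfig path taken from the log):

	package main

	import (
		"fmt"
		"os/exec"
		"time"
	)

	// waitForDefaultSA retries `kubectl get sa default` until it succeeds or the
	// timeout elapses. Sketch only: minikube runs this over SSH on the guest.
	func waitForDefaultSA(kubeconfig string, timeout time.Duration) error {
		deadline := time.Now().Add(timeout)
		for time.Now().Before(deadline) {
			cmd := exec.Command("kubectl", "--kubeconfig", kubeconfig, "get", "sa", "default")
			if err := cmd.Run(); err == nil {
				return nil // service account exists
			}
			time.Sleep(500 * time.Millisecond)
		}
		return fmt.Errorf("default service account not created within %s", timeout)
	}

	func main() {
		if err := waitForDefaultSA("/var/lib/minikube/kubeconfig", time.Minute); err != nil {
			fmt.Println(err)
		}
	}
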
	I0407 12:14:31.426657 1170330 settings.go:142] acquiring lock: {Name:mk19c4dc5d7992642f3fe5ca0bdb3ea65af01b65 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0407 12:14:31.426824 1170330 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/20602-1162386/kubeconfig
	I0407 12:14:31.427329 1170330 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20602-1162386/kubeconfig: {Name:mk712863958f7dbf2601dd82dc9ca7bea42ef42f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0407 12:14:31.427625 1170330 start.go:235] Will wait 6m0s for node &{Name: IP:192.168.39.112 Port:8443 KubernetesVersion:v1.32.2 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0407 12:14:31.427701 1170330 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.32.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I0407 12:14:31.427762 1170330 addons.go:511] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:true auto-pause:false cloud-spanner:true csi-hostpath-driver:true dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:true gvisor:false headlamp:false inaccel:false ingress:true ingress-dns:true inspektor-gadget:true istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:true nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:true registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:true volcano:true volumesnapshots:true yakd:true]
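
	The burst of interleaved `Setting addon ...`, `Checking if "addons-660533" exists`, and `Launching plugin server` lines that follows comes from each enabled addon being configured on its own goroutine against the same profile, which is why the ordering looks shuffled. A minimal sketch of that fan-out; the addon list and the enable function are placeholders, not minikube's real callbacks:

	package main

	import (
		"fmt"
		"sync"
	)

	// enableAddons configures each requested addon concurrently, which is what
	// produces the interleaved log lines. Sketch only.
	func enableAddons(profile string, addons []string) {
		var wg sync.WaitGroup
		for _, name := range addons {
			wg.Add(1)
			go func(name string) {
				defer wg.Done()
				fmt.Printf("Setting addon %s=true in %q\n", name, profile)
				enable(profile, name)
			}(name)
		}
		wg.Wait()
	}

	func enable(profile, name string) {
		// Placeholder for the real work: checking the VM, launching the driver
		// plugin server, and applying the addon's manifests over SSH.
	}

	func main() {
		enableAddons("addons-660533", []string{"ingress", "ingress-dns", "metrics-server", "registry"})
	}
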
	I0407 12:14:31.427891 1170330 addons.go:69] Setting ingress=true in profile "addons-660533"
	I0407 12:14:31.427905 1170330 addons.go:69] Setting ingress-dns=true in profile "addons-660533"
	I0407 12:14:31.427924 1170330 addons.go:238] Setting addon ingress=true in "addons-660533"
	I0407 12:14:31.427930 1170330 addons.go:69] Setting inspektor-gadget=true in profile "addons-660533"
	I0407 12:14:31.427927 1170330 addons.go:69] Setting csi-hostpath-driver=true in profile "addons-660533"
	I0407 12:14:31.427940 1170330 addons.go:238] Setting addon inspektor-gadget=true in "addons-660533"
	I0407 12:14:31.427961 1170330 config.go:182] Loaded profile config "addons-660533": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.32.2
	I0407 12:14:31.427979 1170330 host.go:66] Checking if "addons-660533" exists ...
	I0407 12:14:31.427981 1170330 addons.go:238] Setting addon csi-hostpath-driver=true in "addons-660533"
	I0407 12:14:31.427991 1170330 addons.go:69] Setting nvidia-device-plugin=true in profile "addons-660533"
	I0407 12:14:31.428005 1170330 addons.go:238] Setting addon nvidia-device-plugin=true in "addons-660533"
	I0407 12:14:31.428017 1170330 host.go:66] Checking if "addons-660533" exists ...
	I0407 12:14:31.428026 1170330 host.go:66] Checking if "addons-660533" exists ...
	I0407 12:14:31.428030 1170330 addons.go:69] Setting storage-provisioner-rancher=true in profile "addons-660533"
	I0407 12:14:31.428047 1170330 addons_storage_classes.go:33] enableOrDisableStorageClasses storage-provisioner-rancher=true on "addons-660533"
	I0407 12:14:31.428310 1170330 addons.go:69] Setting volcano=true in profile "addons-660533"
	I0407 12:14:31.428342 1170330 addons.go:238] Setting addon volcano=true in "addons-660533"
	I0407 12:14:31.428347 1170330 addons.go:69] Setting volumesnapshots=true in profile "addons-660533"
	I0407 12:14:31.428379 1170330 host.go:66] Checking if "addons-660533" exists ...
	I0407 12:14:31.428385 1170330 addons.go:238] Setting addon volumesnapshots=true in "addons-660533"
	I0407 12:14:31.428425 1170330 addons.go:69] Setting amd-gpu-device-plugin=true in profile "addons-660533"
	I0407 12:14:31.428432 1170330 host.go:66] Checking if "addons-660533" exists ...
	I0407 12:14:31.428449 1170330 addons.go:238] Setting addon amd-gpu-device-plugin=true in "addons-660533"
	I0407 12:14:31.428479 1170330 host.go:66] Checking if "addons-660533" exists ...
	I0407 12:14:31.428542 1170330 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0407 12:14:31.428547 1170330 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0407 12:14:31.428561 1170330 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0407 12:14:31.427986 1170330 addons.go:69] Setting metrics-server=true in profile "addons-660533"
	I0407 12:14:31.428575 1170330 addons.go:69] Setting registry=true in profile "addons-660533"
	I0407 12:14:31.428583 1170330 addons.go:238] Setting addon metrics-server=true in "addons-660533"
	I0407 12:14:31.428586 1170330 addons.go:238] Setting addon registry=true in "addons-660533"
	I0407 12:14:31.428587 1170330 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0407 12:14:31.428591 1170330 addons.go:69] Setting gcp-auth=true in profile "addons-660533"
	I0407 12:14:31.428607 1170330 host.go:66] Checking if "addons-660533" exists ...
	I0407 12:14:31.428612 1170330 mustload.go:65] Loading cluster: addons-660533
	I0407 12:14:31.428613 1170330 addons.go:69] Setting cloud-spanner=true in profile "addons-660533"
	I0407 12:14:31.428626 1170330 addons.go:238] Setting addon cloud-spanner=true in "addons-660533"
	I0407 12:14:31.428644 1170330 host.go:66] Checking if "addons-660533" exists ...
	I0407 12:14:31.428724 1170330 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0407 12:14:31.428762 1170330 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0407 12:14:31.428962 1170330 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0407 12:14:31.428982 1170330 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0407 12:14:31.429000 1170330 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0407 12:14:31.429005 1170330 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0407 12:14:31.429022 1170330 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0407 12:14:31.429022 1170330 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0407 12:14:31.429083 1170330 addons.go:69] Setting storage-provisioner=true in profile "addons-660533"
	I0407 12:14:31.429095 1170330 addons.go:238] Setting addon storage-provisioner=true in "addons-660533"
	I0407 12:14:31.429117 1170330 host.go:66] Checking if "addons-660533" exists ...
	I0407 12:14:31.429146 1170330 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0407 12:14:31.429176 1170330 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0407 12:14:31.427925 1170330 addons.go:238] Setting addon ingress-dns=true in "addons-660533"
	I0407 12:14:31.428607 1170330 host.go:66] Checking if "addons-660533" exists ...
	I0407 12:14:31.429280 1170330 config.go:182] Loaded profile config "addons-660533": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.32.2
	I0407 12:14:31.429481 1170330 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0407 12:14:31.429511 1170330 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0407 12:14:31.428583 1170330 addons.go:69] Setting default-storageclass=true in profile "addons-660533"
	I0407 12:14:31.429560 1170330 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "addons-660533"
	I0407 12:14:31.427894 1170330 addons.go:69] Setting yakd=true in profile "addons-660533"
	I0407 12:14:31.429575 1170330 addons.go:238] Setting addon yakd=true in "addons-660533"
	I0407 12:14:31.427982 1170330 host.go:66] Checking if "addons-660533" exists ...
	I0407 12:14:31.429996 1170330 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0407 12:14:31.430048 1170330 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0407 12:14:31.429560 1170330 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0407 12:14:31.430277 1170330 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0407 12:14:31.430229 1170330 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0407 12:14:31.431493 1170330 host.go:66] Checking if "addons-660533" exists ...
	I0407 12:14:31.431645 1170330 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0407 12:14:31.431699 1170330 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0407 12:14:31.431935 1170330 host.go:66] Checking if "addons-660533" exists ...
	I0407 12:14:31.428562 1170330 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0407 12:14:31.432096 1170330 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0407 12:14:31.432389 1170330 out.go:177] * Verifying Kubernetes components...
	I0407 12:14:31.432614 1170330 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0407 12:14:31.432805 1170330 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0407 12:14:31.433784 1170330 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0407 12:14:31.439014 1170330 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0407 12:14:31.456083 1170330 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40553
	I0407 12:14:31.456870 1170330 main.go:141] libmachine: () Calling .GetVersion
	I0407 12:14:31.457076 1170330 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38769
	I0407 12:14:31.457206 1170330 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46303
	I0407 12:14:31.457596 1170330 main.go:141] libmachine: Using API Version  1
	I0407 12:14:31.457614 1170330 main.go:141] libmachine: () Calling .SetConfigRaw
	I0407 12:14:31.457686 1170330 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42085
	I0407 12:14:31.457925 1170330 main.go:141] libmachine: () Calling .GetVersion
	I0407 12:14:31.458419 1170330 main.go:141] libmachine: () Calling .GetVersion
	I0407 12:14:31.462431 1170330 main.go:141] libmachine: Using API Version  1
	I0407 12:14:31.462482 1170330 main.go:141] libmachine: () Calling .SetConfigRaw
	I0407 12:14:31.462622 1170330 main.go:141] libmachine: () Calling .GetVersion
	I0407 12:14:31.462754 1170330 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39535
	I0407 12:14:31.462776 1170330 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34441
	I0407 12:14:31.463499 1170330 main.go:141] libmachine: () Calling .GetMachineName
	I0407 12:14:31.463773 1170330 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0407 12:14:31.463822 1170330 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0407 12:14:31.464433 1170330 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0407 12:14:31.464477 1170330 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0407 12:14:31.467128 1170330 main.go:141] libmachine: Using API Version  1
	I0407 12:14:31.467246 1170330 main.go:141] libmachine: () Calling .SetConfigRaw
	I0407 12:14:31.467570 1170330 main.go:141] libmachine: Using API Version  1
	I0407 12:14:31.467603 1170330 main.go:141] libmachine: () Calling .SetConfigRaw
	I0407 12:14:31.467683 1170330 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0407 12:14:31.467739 1170330 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0407 12:14:31.467786 1170330 main.go:141] libmachine: () Calling .GetMachineName
	I0407 12:14:31.467862 1170330 main.go:141] libmachine: () Calling .GetMachineName
	I0407 12:14:31.473625 1170330 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0407 12:14:31.473674 1170330 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0407 12:14:31.474026 1170330 main.go:141] libmachine: () Calling .GetVersion
	I0407 12:14:31.474341 1170330 main.go:141] libmachine: () Calling .GetMachineName
	I0407 12:14:31.474807 1170330 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0407 12:14:31.474862 1170330 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0407 12:14:31.475012 1170330 main.go:141] libmachine: Using API Version  1
	I0407 12:14:31.475033 1170330 main.go:141] libmachine: () Calling .SetConfigRaw
	I0407 12:14:31.476180 1170330 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0407 12:14:31.476289 1170330 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0407 12:14:31.476927 1170330 main.go:141] libmachine: () Calling .GetVersion
	I0407 12:14:31.477136 1170330 main.go:141] libmachine: () Calling .GetMachineName
	I0407 12:14:31.478019 1170330 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0407 12:14:31.478117 1170330 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0407 12:14:31.478696 1170330 main.go:141] libmachine: Using API Version  1
	I0407 12:14:31.478728 1170330 main.go:141] libmachine: () Calling .SetConfigRaw
	I0407 12:14:31.479611 1170330 main.go:141] libmachine: () Calling .GetMachineName
	I0407 12:14:31.481769 1170330 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0407 12:14:31.481841 1170330 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0407 12:14:31.512530 1170330 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41969
	I0407 12:14:31.513628 1170330 main.go:141] libmachine: () Calling .GetVersion
	I0407 12:14:31.514673 1170330 main.go:141] libmachine: Using API Version  1
	I0407 12:14:31.514726 1170330 main.go:141] libmachine: () Calling .SetConfigRaw
	I0407 12:14:31.515285 1170330 main.go:141] libmachine: () Calling .GetMachineName
	I0407 12:14:31.515934 1170330 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0407 12:14:31.515987 1170330 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0407 12:14:31.520717 1170330 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37267
	I0407 12:14:31.521538 1170330 main.go:141] libmachine: () Calling .GetVersion
	I0407 12:14:31.522263 1170330 main.go:141] libmachine: Using API Version  1
	I0407 12:14:31.522287 1170330 main.go:141] libmachine: () Calling .SetConfigRaw
	I0407 12:14:31.523130 1170330 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45135
	I0407 12:14:31.523342 1170330 main.go:141] libmachine: () Calling .GetMachineName
	I0407 12:14:31.523681 1170330 main.go:141] libmachine: (addons-660533) Calling .GetState
	I0407 12:14:31.525745 1170330 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42355
	I0407 12:14:31.526054 1170330 main.go:141] libmachine: () Calling .GetVersion
	I0407 12:14:31.526372 1170330 main.go:141] libmachine: () Calling .GetVersion
	I0407 12:14:31.526474 1170330 host.go:66] Checking if "addons-660533" exists ...
	I0407 12:14:31.526918 1170330 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0407 12:14:31.526970 1170330 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0407 12:14:31.527349 1170330 main.go:141] libmachine: Using API Version  1
	I0407 12:14:31.527372 1170330 main.go:141] libmachine: () Calling .SetConfigRaw
	I0407 12:14:31.527815 1170330 main.go:141] libmachine: () Calling .GetMachineName
	I0407 12:14:31.528074 1170330 main.go:141] libmachine: (addons-660533) Calling .GetState
	I0407 12:14:31.528536 1170330 main.go:141] libmachine: Using API Version  1
	I0407 12:14:31.528556 1170330 main.go:141] libmachine: () Calling .SetConfigRaw
	I0407 12:14:31.529076 1170330 main.go:141] libmachine: () Calling .GetMachineName
	I0407 12:14:31.529687 1170330 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0407 12:14:31.529742 1170330 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0407 12:14:31.530453 1170330 main.go:141] libmachine: (addons-660533) Calling .DriverName
	I0407 12:14:31.530716 1170330 main.go:141] libmachine: Making call to close driver server
	I0407 12:14:31.530734 1170330 main.go:141] libmachine: (addons-660533) Calling .Close
	I0407 12:14:31.533763 1170330 main.go:141] libmachine: Successfully made call to close driver server
	I0407 12:14:31.533794 1170330 main.go:141] libmachine: Making call to close connection to plugin binary
	I0407 12:14:31.533806 1170330 main.go:141] libmachine: Making call to close driver server
	I0407 12:14:31.533815 1170330 main.go:141] libmachine: (addons-660533) Calling .Close
	I0407 12:14:31.533867 1170330 main.go:141] libmachine: (addons-660533) DBG | Closing plugin on server side
	I0407 12:14:31.534176 1170330 main.go:141] libmachine: Successfully made call to close driver server
	I0407 12:14:31.534201 1170330 main.go:141] libmachine: Making call to close connection to plugin binary
	W0407 12:14:31.534343 1170330 out.go:270] ! Enabling 'volcano' returned an error: running callbacks: [volcano addon does not support crio]
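
	The volcano warning is an addon/runtime compatibility gate: the enable callback fails fast when an addon declares it cannot run on the selected container runtime, instead of deploying manifests that would not work on crio. A hedged sketch of such a check; the support table below is illustrative and not minikube's actual compatibility data:

	package main

	import (
		"errors"
		"fmt"
	)

	// unsupportedRuntimes is an illustrative table of addons that refuse
	// certain container runtimes.
	var unsupportedRuntimes = map[string][]string{
		"volcano": {"crio"},
	}

	func checkAddonRuntime(addon, runtime string) error {
		for _, r := range unsupportedRuntimes[addon] {
			if r == runtime {
				return errors.New(addon + " addon does not support " + runtime)
			}
		}
		return nil
	}

	func main() {
		if err := checkAddonRuntime("volcano", "crio"); err != nil {
			fmt.Printf("! Enabling 'volcano' returned an error: %v\n", err)
		}
	}
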
	I0407 12:14:31.537205 1170330 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34529
	I0407 12:14:31.537537 1170330 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35837
	I0407 12:14:31.538725 1170330 main.go:141] libmachine: () Calling .GetVersion
	I0407 12:14:31.539619 1170330 main.go:141] libmachine: Using API Version  1
	I0407 12:14:31.539648 1170330 main.go:141] libmachine: () Calling .SetConfigRaw
	I0407 12:14:31.540216 1170330 main.go:141] libmachine: () Calling .GetMachineName
	I0407 12:14:31.540897 1170330 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0407 12:14:31.540979 1170330 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0407 12:14:31.541962 1170330 main.go:141] libmachine: () Calling .GetVersion
	I0407 12:14:31.542115 1170330 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44443
	I0407 12:14:31.542916 1170330 main.go:141] libmachine: () Calling .GetVersion
	I0407 12:14:31.543146 1170330 main.go:141] libmachine: Using API Version  1
	I0407 12:14:31.543177 1170330 main.go:141] libmachine: () Calling .SetConfigRaw
	I0407 12:14:31.543838 1170330 main.go:141] libmachine: Using API Version  1
	I0407 12:14:31.543866 1170330 main.go:141] libmachine: () Calling .SetConfigRaw
	I0407 12:14:31.543878 1170330 main.go:141] libmachine: () Calling .GetMachineName
	I0407 12:14:31.544484 1170330 main.go:141] libmachine: () Calling .GetMachineName
	I0407 12:14:31.544756 1170330 main.go:141] libmachine: (addons-660533) Calling .GetState
	I0407 12:14:31.546596 1170330 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33009
	I0407 12:14:31.546895 1170330 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38921
	I0407 12:14:31.547352 1170330 main.go:141] libmachine: () Calling .GetVersion
	I0407 12:14:31.547493 1170330 main.go:141] libmachine: (addons-660533) Calling .DriverName
	I0407 12:14:31.547632 1170330 main.go:141] libmachine: (addons-660533) Calling .GetState
	I0407 12:14:31.547960 1170330 main.go:141] libmachine: Using API Version  1
	I0407 12:14:31.547981 1170330 main.go:141] libmachine: () Calling .SetConfigRaw
	I0407 12:14:31.548424 1170330 main.go:141] libmachine: () Calling .GetVersion
	I0407 12:14:31.548540 1170330 main.go:141] libmachine: () Calling .GetMachineName
	I0407 12:14:31.549342 1170330 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0407 12:14:31.549411 1170330 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0407 12:14:31.549871 1170330 main.go:141] libmachine: Using API Version  1
	I0407 12:14:31.549892 1170330 main.go:141] libmachine: () Calling .SetConfigRaw
	I0407 12:14:31.550551 1170330 main.go:141] libmachine: () Calling .GetMachineName
	I0407 12:14:31.550670 1170330 out.go:177]   - Using image nvcr.io/nvidia/k8s-device-plugin:v0.17.0
	I0407 12:14:31.552788 1170330 addons.go:238] Setting addon default-storageclass=true in "addons-660533"
	I0407 12:14:31.552844 1170330 host.go:66] Checking if "addons-660533" exists ...
	I0407 12:14:31.553347 1170330 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0407 12:14:31.553396 1170330 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0407 12:14:31.553650 1170330 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:32775
	I0407 12:14:31.554081 1170330 main.go:141] libmachine: (addons-660533) Calling .GetState
	I0407 12:14:31.556226 1170330 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45865
	I0407 12:14:31.556522 1170330 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38681
	I0407 12:14:31.557392 1170330 main.go:141] libmachine: () Calling .GetVersion
	I0407 12:14:31.558663 1170330 main.go:141] libmachine: () Calling .GetVersion
	I0407 12:14:31.558802 1170330 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34249
	I0407 12:14:31.559127 1170330 main.go:141] libmachine: Using API Version  1
	I0407 12:14:31.559148 1170330 main.go:141] libmachine: () Calling .SetConfigRaw
	I0407 12:14:31.560299 1170330 addons.go:238] Setting addon storage-provisioner-rancher=true in "addons-660533"
	I0407 12:14:31.560359 1170330 host.go:66] Checking if "addons-660533" exists ...
	I0407 12:14:31.560764 1170330 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0407 12:14:31.560815 1170330 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0407 12:14:31.561895 1170330 main.go:141] libmachine: Using API Version  1
	I0407 12:14:31.561931 1170330 main.go:141] libmachine: () Calling .SetConfigRaw
	I0407 12:14:31.562048 1170330 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37501
	I0407 12:14:31.562799 1170330 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38191
	I0407 12:14:31.562989 1170330 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46247
	I0407 12:14:31.563072 1170330 main.go:141] libmachine: () Calling .GetVersion
	I0407 12:14:31.563145 1170330 main.go:141] libmachine: () Calling .GetMachineName
	I0407 12:14:31.563333 1170330 main.go:141] libmachine: () Calling .GetVersion
	I0407 12:14:31.563516 1170330 main.go:141] libmachine: () Calling .GetVersion
	I0407 12:14:31.563748 1170330 main.go:141] libmachine: Using API Version  1
	I0407 12:14:31.563773 1170330 main.go:141] libmachine: () Calling .SetConfigRaw
	I0407 12:14:31.563916 1170330 main.go:141] libmachine: Using API Version  1
	I0407 12:14:31.563926 1170330 main.go:141] libmachine: () Calling .SetConfigRaw
	I0407 12:14:31.564295 1170330 main.go:141] libmachine: () Calling .GetMachineName
	I0407 12:14:31.564423 1170330 main.go:141] libmachine: Using API Version  1
	I0407 12:14:31.564443 1170330 main.go:141] libmachine: () Calling .SetConfigRaw
	I0407 12:14:31.564443 1170330 main.go:141] libmachine: () Calling .GetMachineName
	I0407 12:14:31.564711 1170330 main.go:141] libmachine: (addons-660533) Calling .GetState
	I0407 12:14:31.564887 1170330 main.go:141] libmachine: () Calling .GetMachineName
	I0407 12:14:31.564888 1170330 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0407 12:14:31.564937 1170330 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0407 12:14:31.565153 1170330 main.go:141] libmachine: (addons-660533) Calling .GetState
	I0407 12:14:31.565232 1170330 main.go:141] libmachine: (addons-660533) Calling .GetState
	I0407 12:14:31.565937 1170330 main.go:141] libmachine: () Calling .GetMachineName
	I0407 12:14:31.566915 1170330 main.go:141] libmachine: () Calling .GetVersion
	I0407 12:14:31.567166 1170330 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0407 12:14:31.567238 1170330 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0407 12:14:31.568041 1170330 main.go:141] libmachine: Using API Version  1
	I0407 12:14:31.568067 1170330 main.go:141] libmachine: () Calling .SetConfigRaw
	I0407 12:14:31.568160 1170330 main.go:141] libmachine: (addons-660533) Calling .DriverName
	I0407 12:14:31.568677 1170330 main.go:141] libmachine: () Calling .GetMachineName
	I0407 12:14:31.568910 1170330 main.go:141] libmachine: (addons-660533) Calling .DriverName
	I0407 12:14:31.569103 1170330 main.go:141] libmachine: (addons-660533) Calling .GetState
	I0407 12:14:31.570198 1170330 main.go:141] libmachine: (addons-660533) Calling .DriverName
	I0407 12:14:31.571178 1170330 addons.go:435] installing /etc/kubernetes/addons/nvidia-device-plugin.yaml
	I0407 12:14:31.571213 1170330 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/nvidia-device-plugin.yaml (1966 bytes)
	I0407 12:14:31.571241 1170330 main.go:141] libmachine: (addons-660533) Calling .GetSSHHostname
	I0407 12:14:31.571640 1170330 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0407 12:14:31.571686 1170330 out.go:177]   - Using image gcr.io/cloud-spanner-emulator/emulator:1.5.31
	I0407 12:14:31.571931 1170330 main.go:141] libmachine: (addons-660533) Calling .DriverName
	I0407 12:14:31.572875 1170330 out.go:177]   - Using image docker.io/rocm/k8s-device-plugin:1.25.2.8
	I0407 12:14:31.574431 1170330 addons.go:435] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0407 12:14:31.574458 1170330 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0407 12:14:31.574493 1170330 main.go:141] libmachine: (addons-660533) Calling .GetSSHHostname
	I0407 12:14:31.575220 1170330 addons.go:435] installing /etc/kubernetes/addons/deployment.yaml
	I0407 12:14:31.575244 1170330 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/deployment.yaml (1004 bytes)
	I0407 12:14:31.575272 1170330 main.go:141] libmachine: (addons-660533) Calling .GetSSHHostname
	I0407 12:14:31.575343 1170330 main.go:141] libmachine: () Calling .GetVersion
	I0407 12:14:31.575924 1170330 main.go:141] libmachine: Using API Version  1
	I0407 12:14:31.575952 1170330 main.go:141] libmachine: () Calling .SetConfigRaw
	I0407 12:14:31.576412 1170330 main.go:141] libmachine: () Calling .GetMachineName
	I0407 12:14:31.577067 1170330 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0407 12:14:31.577119 1170330 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0407 12:14:31.578780 1170330 out.go:177]   - Using image registry.k8s.io/sig-storage/snapshot-controller:v6.1.0
	I0407 12:14:31.579077 1170330 addons.go:435] installing /etc/kubernetes/addons/amd-gpu-device-plugin.yaml
	I0407 12:14:31.579096 1170330 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/amd-gpu-device-plugin.yaml (1868 bytes)
	I0407 12:14:31.579150 1170330 main.go:141] libmachine: (addons-660533) DBG | domain addons-660533 has defined MAC address 52:54:00:1e:96:60 in network mk-addons-660533
	I0407 12:14:31.579171 1170330 main.go:141] libmachine: (addons-660533) Calling .GetSSHHostname
	I0407 12:14:31.579300 1170330 main.go:141] libmachine: (addons-660533) DBG | domain addons-660533 has defined MAC address 52:54:00:1e:96:60 in network mk-addons-660533
	I0407 12:14:31.580936 1170330 main.go:141] libmachine: (addons-660533) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:1e:96:60", ip: ""} in network mk-addons-660533: {Iface:virbr1 ExpiryTime:2025-04-07 13:13:58 +0000 UTC Type:0 Mac:52:54:00:1e:96:60 Iaid: IPaddr:192.168.39.112 Prefix:24 Hostname:addons-660533 Clientid:01:52:54:00:1e:96:60}
	I0407 12:14:31.580986 1170330 main.go:141] libmachine: (addons-660533) DBG | domain addons-660533 has defined IP address 192.168.39.112 and MAC address 52:54:00:1e:96:60 in network mk-addons-660533
	I0407 12:14:31.581190 1170330 addons.go:435] installing /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml
	I0407 12:14:31.581216 1170330 ssh_runner.go:362] scp volumesnapshots/csi-hostpath-snapshotclass.yaml --> /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml (934 bytes)
	I0407 12:14:31.581216 1170330 main.go:141] libmachine: (addons-660533) DBG | domain addons-660533 has defined MAC address 52:54:00:1e:96:60 in network mk-addons-660533
	I0407 12:14:31.581248 1170330 main.go:141] libmachine: (addons-660533) Calling .GetSSHHostname
	I0407 12:14:31.581532 1170330 main.go:141] libmachine: (addons-660533) Calling .GetSSHPort
	I0407 12:14:31.581616 1170330 main.go:141] libmachine: (addons-660533) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:1e:96:60", ip: ""} in network mk-addons-660533: {Iface:virbr1 ExpiryTime:2025-04-07 13:13:58 +0000 UTC Type:0 Mac:52:54:00:1e:96:60 Iaid: IPaddr:192.168.39.112 Prefix:24 Hostname:addons-660533 Clientid:01:52:54:00:1e:96:60}
	I0407 12:14:31.581632 1170330 main.go:141] libmachine: (addons-660533) DBG | domain addons-660533 has defined IP address 192.168.39.112 and MAC address 52:54:00:1e:96:60 in network mk-addons-660533
	I0407 12:14:31.582112 1170330 main.go:141] libmachine: (addons-660533) Calling .GetSSHKeyPath
	I0407 12:14:31.582176 1170330 main.go:141] libmachine: (addons-660533) Calling .GetSSHPort
	I0407 12:14:31.582349 1170330 main.go:141] libmachine: (addons-660533) Calling .GetSSHUsername
	I0407 12:14:31.582385 1170330 main.go:141] libmachine: (addons-660533) Calling .GetSSHKeyPath
	I0407 12:14:31.582528 1170330 main.go:141] libmachine: (addons-660533) Calling .GetSSHUsername
	I0407 12:14:31.582541 1170330 sshutil.go:53] new ssh client: &{IP:192.168.39.112 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20602-1162386/.minikube/machines/addons-660533/id_rsa Username:docker}
	I0407 12:14:31.582741 1170330 sshutil.go:53] new ssh client: &{IP:192.168.39.112 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20602-1162386/.minikube/machines/addons-660533/id_rsa Username:docker}
	I0407 12:14:31.583044 1170330 main.go:141] libmachine: (addons-660533) Calling .GetSSHPort
	I0407 12:14:31.583119 1170330 main.go:141] libmachine: (addons-660533) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:1e:96:60", ip: ""} in network mk-addons-660533: {Iface:virbr1 ExpiryTime:2025-04-07 13:13:58 +0000 UTC Type:0 Mac:52:54:00:1e:96:60 Iaid: IPaddr:192.168.39.112 Prefix:24 Hostname:addons-660533 Clientid:01:52:54:00:1e:96:60}
	I0407 12:14:31.583135 1170330 main.go:141] libmachine: (addons-660533) DBG | domain addons-660533 has defined IP address 192.168.39.112 and MAC address 52:54:00:1e:96:60 in network mk-addons-660533
	I0407 12:14:31.583454 1170330 main.go:141] libmachine: (addons-660533) Calling .GetSSHKeyPath
	I0407 12:14:31.583750 1170330 main.go:141] libmachine: (addons-660533) Calling .GetSSHUsername
	I0407 12:14:31.584151 1170330 sshutil.go:53] new ssh client: &{IP:192.168.39.112 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20602-1162386/.minikube/machines/addons-660533/id_rsa Username:docker}
	I0407 12:14:31.586342 1170330 main.go:141] libmachine: (addons-660533) DBG | domain addons-660533 has defined MAC address 52:54:00:1e:96:60 in network mk-addons-660533
	I0407 12:14:31.586381 1170330 main.go:141] libmachine: (addons-660533) DBG | domain addons-660533 has defined MAC address 52:54:00:1e:96:60 in network mk-addons-660533
	I0407 12:14:31.586407 1170330 main.go:141] libmachine: (addons-660533) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:1e:96:60", ip: ""} in network mk-addons-660533: {Iface:virbr1 ExpiryTime:2025-04-07 13:13:58 +0000 UTC Type:0 Mac:52:54:00:1e:96:60 Iaid: IPaddr:192.168.39.112 Prefix:24 Hostname:addons-660533 Clientid:01:52:54:00:1e:96:60}
	I0407 12:14:31.586423 1170330 main.go:141] libmachine: (addons-660533) DBG | domain addons-660533 has defined IP address 192.168.39.112 and MAC address 52:54:00:1e:96:60 in network mk-addons-660533
	I0407 12:14:31.586679 1170330 main.go:141] libmachine: (addons-660533) Calling .GetSSHPort
	I0407 12:14:31.587167 1170330 main.go:141] libmachine: (addons-660533) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:1e:96:60", ip: ""} in network mk-addons-660533: {Iface:virbr1 ExpiryTime:2025-04-07 13:13:58 +0000 UTC Type:0 Mac:52:54:00:1e:96:60 Iaid: IPaddr:192.168.39.112 Prefix:24 Hostname:addons-660533 Clientid:01:52:54:00:1e:96:60}
	I0407 12:14:31.587209 1170330 main.go:141] libmachine: (addons-660533) DBG | domain addons-660533 has defined IP address 192.168.39.112 and MAC address 52:54:00:1e:96:60 in network mk-addons-660533
	I0407 12:14:31.587252 1170330 main.go:141] libmachine: (addons-660533) Calling .GetSSHKeyPath
	I0407 12:14:31.587636 1170330 main.go:141] libmachine: (addons-660533) Calling .GetSSHPort
	I0407 12:14:31.587699 1170330 main.go:141] libmachine: (addons-660533) Calling .GetSSHUsername
	I0407 12:14:31.588082 1170330 sshutil.go:53] new ssh client: &{IP:192.168.39.112 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20602-1162386/.minikube/machines/addons-660533/id_rsa Username:docker}
	I0407 12:14:31.588138 1170330 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38175
	I0407 12:14:31.588322 1170330 main.go:141] libmachine: (addons-660533) Calling .GetSSHKeyPath
	I0407 12:14:31.588775 1170330 main.go:141] libmachine: (addons-660533) Calling .GetSSHUsername
	I0407 12:14:31.589061 1170330 sshutil.go:53] new ssh client: &{IP:192.168.39.112 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20602-1162386/.minikube/machines/addons-660533/id_rsa Username:docker}
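
	Each `new ssh client` line corresponds to a fresh SSH connection to the guest at 192.168.39.112:22 as user docker with the per-machine id_rsa key, over which the addon manifests are copied and runner commands executed. A minimal client along those lines using golang.org/x/crypto/ssh; host-key checking is relaxed here for brevity, and the address, user, and key path are the ones shown in the log:

	package main

	import (
		"fmt"
		"os"

		"golang.org/x/crypto/ssh"
	)

	// dial opens an SSH connection to the minikube guest with key-based auth,
	// roughly what each "new ssh client" line represents. Sketch only:
	// InsecureIgnoreHostKey skips host-key verification.
	func dial(addr, user, keyPath string) (*ssh.Client, error) {
		key, err := os.ReadFile(keyPath)
		if err != nil {
			return nil, err
		}
		signer, err := ssh.ParsePrivateKey(key)
		if err != nil {
			return nil, err
		}
		cfg := &ssh.ClientConfig{
			User:            user,
			Auth:            []ssh.AuthMethod{ssh.PublicKeys(signer)},
			HostKeyCallback: ssh.InsecureIgnoreHostKey(),
		}
		return ssh.Dial("tcp", addr, cfg)
	}

	func main() {
		client, err := dial("192.168.39.112:22", "docker",
			"/home/jenkins/minikube-integration/20602-1162386/.minikube/machines/addons-660533/id_rsa")
		if err != nil {
			fmt.Println(err)
			return
		}
		defer client.Close()
		fmt.Println("connected")
	}
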
	I0407 12:14:31.589620 1170330 main.go:141] libmachine: () Calling .GetVersion
	I0407 12:14:31.590305 1170330 main.go:141] libmachine: Using API Version  1
	I0407 12:14:31.590326 1170330 main.go:141] libmachine: () Calling .SetConfigRaw
	I0407 12:14:31.590783 1170330 main.go:141] libmachine: () Calling .GetMachineName
	I0407 12:14:31.591003 1170330 main.go:141] libmachine: (addons-660533) Calling .DriverName
	I0407 12:14:31.592803 1170330 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35759
	I0407 12:14:31.593267 1170330 main.go:141] libmachine: () Calling .GetVersion
	I0407 12:14:31.593982 1170330 main.go:141] libmachine: Using API Version  1
	I0407 12:14:31.594022 1170330 main.go:141] libmachine: () Calling .SetConfigRaw
	I0407 12:14:31.594671 1170330 main.go:141] libmachine: () Calling .GetMachineName
	I0407 12:14:31.594946 1170330 main.go:141] libmachine: (addons-660533) Calling .GetState
	I0407 12:14:31.596487 1170330 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43973
	I0407 12:14:31.597199 1170330 main.go:141] libmachine: (addons-660533) Calling .DriverName
	I0407 12:14:31.597263 1170330 main.go:141] libmachine: () Calling .GetVersion
	I0407 12:14:31.597821 1170330 main.go:141] libmachine: Using API Version  1
	I0407 12:14:31.597847 1170330 main.go:141] libmachine: () Calling .SetConfigRaw
	I0407 12:14:31.598525 1170330 main.go:141] libmachine: () Calling .GetMachineName
	I0407 12:14:31.599198 1170330 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0407 12:14:31.599252 1170330 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0407 12:14:31.599655 1170330 out.go:177]   - Using image ghcr.io/inspektor-gadget/inspektor-gadget:v0.37.0
	I0407 12:14:31.601380 1170330 addons.go:435] installing /etc/kubernetes/addons/ig-crd.yaml
	I0407 12:14:31.601413 1170330 ssh_runner.go:362] scp inspektor-gadget/ig-crd.yaml --> /etc/kubernetes/addons/ig-crd.yaml (5248 bytes)
	I0407 12:14:31.601447 1170330 main.go:141] libmachine: (addons-660533) Calling .GetSSHHostname
	I0407 12:14:31.605633 1170330 main.go:141] libmachine: (addons-660533) DBG | domain addons-660533 has defined MAC address 52:54:00:1e:96:60 in network mk-addons-660533
	I0407 12:14:31.606202 1170330 main.go:141] libmachine: (addons-660533) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:1e:96:60", ip: ""} in network mk-addons-660533: {Iface:virbr1 ExpiryTime:2025-04-07 13:13:58 +0000 UTC Type:0 Mac:52:54:00:1e:96:60 Iaid: IPaddr:192.168.39.112 Prefix:24 Hostname:addons-660533 Clientid:01:52:54:00:1e:96:60}
	I0407 12:14:31.606367 1170330 main.go:141] libmachine: (addons-660533) DBG | domain addons-660533 has defined IP address 192.168.39.112 and MAC address 52:54:00:1e:96:60 in network mk-addons-660533
	I0407 12:14:31.606808 1170330 main.go:141] libmachine: (addons-660533) Calling .GetSSHPort
	I0407 12:14:31.607149 1170330 main.go:141] libmachine: (addons-660533) Calling .GetSSHKeyPath
	I0407 12:14:31.607365 1170330 main.go:141] libmachine: (addons-660533) Calling .GetSSHUsername
	I0407 12:14:31.607548 1170330 sshutil.go:53] new ssh client: &{IP:192.168.39.112 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20602-1162386/.minikube/machines/addons-660533/id_rsa Username:docker}
	I0407 12:14:31.616634 1170330 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37867
	I0407 12:14:31.617465 1170330 main.go:141] libmachine: () Calling .GetVersion
	I0407 12:14:31.618348 1170330 main.go:141] libmachine: Using API Version  1
	I0407 12:14:31.618380 1170330 main.go:141] libmachine: () Calling .SetConfigRaw
	I0407 12:14:31.618935 1170330 main.go:141] libmachine: () Calling .GetMachineName
	I0407 12:14:31.619440 1170330 main.go:141] libmachine: (addons-660533) Calling .GetState
	I0407 12:14:31.620576 1170330 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34047
	I0407 12:14:31.621350 1170330 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34931
	I0407 12:14:31.621547 1170330 main.go:141] libmachine: () Calling .GetVersion
	I0407 12:14:31.621694 1170330 main.go:141] libmachine: (addons-660533) Calling .DriverName
	I0407 12:14:31.622865 1170330 main.go:141] libmachine: Using API Version  1
	I0407 12:14:31.622890 1170330 main.go:141] libmachine: () Calling .SetConfigRaw
	I0407 12:14:31.623575 1170330 main.go:141] libmachine: () Calling .GetMachineName
	I0407 12:14:31.624088 1170330 out.go:177]   - Using image gcr.io/k8s-minikube/kube-registry-proxy:0.0.8
	I0407 12:14:31.624321 1170330 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0407 12:14:31.624381 1170330 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0407 12:14:31.624730 1170330 main.go:141] libmachine: () Calling .GetVersion
	I0407 12:14:31.624854 1170330 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44923
	I0407 12:14:31.625083 1170330 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33049
	I0407 12:14:31.625680 1170330 main.go:141] libmachine: () Calling .GetVersion
	I0407 12:14:31.625766 1170330 main.go:141] libmachine: () Calling .GetVersion
	I0407 12:14:31.625906 1170330 main.go:141] libmachine: Using API Version  1
	I0407 12:14:31.625919 1170330 main.go:141] libmachine: () Calling .SetConfigRaw
	I0407 12:14:31.626442 1170330 main.go:141] libmachine: Using API Version  1
	I0407 12:14:31.626465 1170330 main.go:141] libmachine: () Calling .SetConfigRaw
	I0407 12:14:31.626544 1170330 main.go:141] libmachine: Using API Version  1
	I0407 12:14:31.626561 1170330 main.go:141] libmachine: () Calling .SetConfigRaw
	I0407 12:14:31.627019 1170330 main.go:141] libmachine: () Calling .GetMachineName
	I0407 12:14:31.627091 1170330 main.go:141] libmachine: () Calling .GetMachineName
	I0407 12:14:31.627293 1170330 main.go:141] libmachine: () Calling .GetMachineName
	I0407 12:14:31.627364 1170330 main.go:141] libmachine: (addons-660533) Calling .GetState
	I0407 12:14:31.627400 1170330 out.go:177]   - Using image docker.io/registry:2.8.3
	I0407 12:14:31.627787 1170330 main.go:141] libmachine: (addons-660533) Calling .GetState
	I0407 12:14:31.627853 1170330 main.go:141] libmachine: (addons-660533) Calling .GetState
	I0407 12:14:31.628654 1170330 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35541
	I0407 12:14:31.629506 1170330 main.go:141] libmachine: () Calling .GetVersion
	I0407 12:14:31.629641 1170330 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42911
	I0407 12:14:31.629741 1170330 addons.go:435] installing /etc/kubernetes/addons/registry-rc.yaml
	I0407 12:14:31.629762 1170330 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/registry-rc.yaml (860 bytes)
	I0407 12:14:31.629788 1170330 main.go:141] libmachine: (addons-660533) Calling .GetSSHHostname
	I0407 12:14:31.630584 1170330 main.go:141] libmachine: Using API Version  1
	I0407 12:14:31.630637 1170330 main.go:141] libmachine: () Calling .SetConfigRaw
	I0407 12:14:31.630923 1170330 main.go:141] libmachine: (addons-660533) Calling .DriverName
	I0407 12:14:31.631794 1170330 main.go:141] libmachine: () Calling .GetMachineName
	I0407 12:14:31.632019 1170330 main.go:141] libmachine: (addons-660533) Calling .GetState
	I0407 12:14:31.632301 1170330 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42057
	I0407 12:14:31.632940 1170330 main.go:141] libmachine: () Calling .GetVersion
	I0407 12:14:31.633098 1170330 main.go:141] libmachine: (addons-660533) Calling .DriverName
	I0407 12:14:31.633133 1170330 main.go:141] libmachine: () Calling .GetVersion
	I0407 12:14:31.633252 1170330 out.go:177]   - Using image gcr.io/k8s-minikube/minikube-ingress-dns:0.0.3
	I0407 12:14:31.633654 1170330 main.go:141] libmachine: Using API Version  1
	I0407 12:14:31.633681 1170330 main.go:141] libmachine: () Calling .SetConfigRaw
	I0407 12:14:31.633792 1170330 main.go:141] libmachine: (addons-660533) Calling .DriverName
	I0407 12:14:31.634038 1170330 main.go:141] libmachine: Using API Version  1
	I0407 12:14:31.634305 1170330 main.go:141] libmachine: () Calling .GetMachineName
	I0407 12:14:31.634058 1170330 main.go:141] libmachine: () Calling .SetConfigRaw
	I0407 12:14:31.634358 1170330 main.go:141] libmachine: (addons-660533) Calling .DriverName
	I0407 12:14:31.634197 1170330 main.go:141] libmachine: (addons-660533) DBG | domain addons-660533 has defined MAC address 52:54:00:1e:96:60 in network mk-addons-660533
	I0407 12:14:31.634678 1170330 main.go:141] libmachine: (addons-660533) Calling .GetState
	I0407 12:14:31.634835 1170330 main.go:141] libmachine: () Calling .GetMachineName
	I0407 12:14:31.634874 1170330 main.go:141] libmachine: (addons-660533) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:1e:96:60", ip: ""} in network mk-addons-660533: {Iface:virbr1 ExpiryTime:2025-04-07 13:13:58 +0000 UTC Type:0 Mac:52:54:00:1e:96:60 Iaid: IPaddr:192.168.39.112 Prefix:24 Hostname:addons-660533 Clientid:01:52:54:00:1e:96:60}
	I0407 12:14:31.634895 1170330 main.go:141] libmachine: (addons-660533) DBG | domain addons-660533 has defined IP address 192.168.39.112 and MAC address 52:54:00:1e:96:60 in network mk-addons-660533
	I0407 12:14:31.635136 1170330 out.go:177]   - Using image docker.io/rancher/local-path-provisioner:v0.0.22
	I0407 12:14:31.635248 1170330 addons.go:435] installing /etc/kubernetes/addons/ingress-dns-pod.yaml
	I0407 12:14:31.635270 1170330 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ingress-dns-pod.yaml (2442 bytes)
	I0407 12:14:31.635291 1170330 main.go:141] libmachine: (addons-660533) Calling .GetSSHHostname
	I0407 12:14:31.635852 1170330 main.go:141] libmachine: (addons-660533) Calling .GetSSHPort
	I0407 12:14:31.635886 1170330 main.go:141] libmachine: (addons-660533) Calling .GetState
	I0407 12:14:31.636062 1170330 main.go:141] libmachine: (addons-660533) Calling .GetSSHKeyPath
	I0407 12:14:31.636244 1170330 main.go:141] libmachine: (addons-660533) Calling .GetSSHUsername
	I0407 12:14:31.636392 1170330 sshutil.go:53] new ssh client: &{IP:192.168.39.112 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20602-1162386/.minikube/machines/addons-660533/id_rsa Username:docker}
	I0407 12:14:31.636867 1170330 out.go:177]   - Using image docker.io/marcnuri/yakd:0.0.5
	I0407 12:14:31.636916 1170330 out.go:177]   - Using image registry.k8s.io/ingress-nginx/controller:v1.11.3
	I0407 12:14:31.636967 1170330 main.go:141] libmachine: (addons-660533) Calling .DriverName
	I0407 12:14:31.639347 1170330 main.go:141] libmachine: (addons-660533) DBG | domain addons-660533 has defined MAC address 52:54:00:1e:96:60 in network mk-addons-660533
	I0407 12:14:31.639409 1170330 addons.go:435] installing /etc/kubernetes/addons/yakd-ns.yaml
	I0407 12:14:31.639429 1170330 ssh_runner.go:362] scp yakd/yakd-ns.yaml --> /etc/kubernetes/addons/yakd-ns.yaml (171 bytes)
	I0407 12:14:31.639523 1170330 main.go:141] libmachine: (addons-660533) Calling .GetSSHHostname
	I0407 12:14:31.639549 1170330 out.go:177]   - Using image docker.io/busybox:stable
	I0407 12:14:31.639663 1170330 out.go:177]   - Using image registry.k8s.io/metrics-server/metrics-server:v0.7.2
	I0407 12:14:31.639913 1170330 main.go:141] libmachine: (addons-660533) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:1e:96:60", ip: ""} in network mk-addons-660533: {Iface:virbr1 ExpiryTime:2025-04-07 13:13:58 +0000 UTC Type:0 Mac:52:54:00:1e:96:60 Iaid: IPaddr:192.168.39.112 Prefix:24 Hostname:addons-660533 Clientid:01:52:54:00:1e:96:60}
	I0407 12:14:31.639932 1170330 main.go:141] libmachine: (addons-660533) DBG | domain addons-660533 has defined IP address 192.168.39.112 and MAC address 52:54:00:1e:96:60 in network mk-addons-660533
	I0407 12:14:31.640066 1170330 main.go:141] libmachine: (addons-660533) Calling .GetSSHPort
	I0407 12:14:31.640325 1170330 main.go:141] libmachine: (addons-660533) Calling .GetSSHKeyPath
	I0407 12:14:31.640455 1170330 main.go:141] libmachine: (addons-660533) Calling .GetSSHUsername
	I0407 12:14:31.640582 1170330 sshutil.go:53] new ssh client: &{IP:192.168.39.112 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20602-1162386/.minikube/machines/addons-660533/id_rsa Username:docker}
	I0407 12:14:31.641072 1170330 main.go:141] libmachine: (addons-660533) Calling .DriverName
	I0407 12:14:31.641408 1170330 out.go:177]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v1.4.4
	I0407 12:14:31.641529 1170330 addons.go:435] installing /etc/kubernetes/addons/storage-provisioner-rancher.yaml
	I0407 12:14:31.641546 1170330 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner-rancher.yaml (3113 bytes)
	I0407 12:14:31.641563 1170330 main.go:141] libmachine: (addons-660533) Calling .GetSSHHostname
	I0407 12:14:31.641791 1170330 addons.go:435] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I0407 12:14:31.641814 1170330 ssh_runner.go:362] scp metrics-server/metrics-apiservice.yaml --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I0407 12:14:31.641836 1170330 main.go:141] libmachine: (addons-660533) Calling .GetSSHHostname
	I0407 12:14:31.642978 1170330 main.go:141] libmachine: (addons-660533) DBG | domain addons-660533 has defined MAC address 52:54:00:1e:96:60 in network mk-addons-660533
	I0407 12:14:31.643515 1170330 main.go:141] libmachine: (addons-660533) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:1e:96:60", ip: ""} in network mk-addons-660533: {Iface:virbr1 ExpiryTime:2025-04-07 13:13:58 +0000 UTC Type:0 Mac:52:54:00:1e:96:60 Iaid: IPaddr:192.168.39.112 Prefix:24 Hostname:addons-660533 Clientid:01:52:54:00:1e:96:60}
	I0407 12:14:31.643543 1170330 main.go:141] libmachine: (addons-660533) DBG | domain addons-660533 has defined IP address 192.168.39.112 and MAC address 52:54:00:1e:96:60 in network mk-addons-660533
	I0407 12:14:31.643636 1170330 out.go:177]   - Using image registry.k8s.io/sig-storage/livenessprobe:v2.8.0
	I0407 12:14:31.643878 1170330 main.go:141] libmachine: (addons-660533) Calling .GetSSHPort
	I0407 12:14:31.644198 1170330 main.go:141] libmachine: (addons-660533) Calling .GetSSHKeyPath
	I0407 12:14:31.644424 1170330 main.go:141] libmachine: (addons-660533) Calling .GetSSHUsername
	I0407 12:14:31.644596 1170330 sshutil.go:53] new ssh client: &{IP:192.168.39.112 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20602-1162386/.minikube/machines/addons-660533/id_rsa Username:docker}
	I0407 12:14:31.645696 1170330 out.go:177]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v1.4.4
	I0407 12:14:31.646600 1170330 main.go:141] libmachine: (addons-660533) DBG | domain addons-660533 has defined MAC address 52:54:00:1e:96:60 in network mk-addons-660533
	I0407 12:14:31.647714 1170330 addons.go:435] installing /etc/kubernetes/addons/ingress-deploy.yaml
	I0407 12:14:31.647744 1170330 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ingress-deploy.yaml (16078 bytes)
	I0407 12:14:31.647772 1170330 main.go:141] libmachine: (addons-660533) Calling .GetSSHHostname
	I0407 12:14:31.647778 1170330 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-resizer:v1.6.0
	I0407 12:14:31.647907 1170330 main.go:141] libmachine: (addons-660533) DBG | domain addons-660533 has defined MAC address 52:54:00:1e:96:60 in network mk-addons-660533
	I0407 12:14:31.648143 1170330 main.go:141] libmachine: (addons-660533) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:1e:96:60", ip: ""} in network mk-addons-660533: {Iface:virbr1 ExpiryTime:2025-04-07 13:13:58 +0000 UTC Type:0 Mac:52:54:00:1e:96:60 Iaid: IPaddr:192.168.39.112 Prefix:24 Hostname:addons-660533 Clientid:01:52:54:00:1e:96:60}
	I0407 12:14:31.648166 1170330 main.go:141] libmachine: (addons-660533) DBG | domain addons-660533 has defined IP address 192.168.39.112 and MAC address 52:54:00:1e:96:60 in network mk-addons-660533
	I0407 12:14:31.648429 1170330 main.go:141] libmachine: (addons-660533) Calling .GetSSHPort
	I0407 12:14:31.648895 1170330 main.go:141] libmachine: (addons-660533) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:1e:96:60", ip: ""} in network mk-addons-660533: {Iface:virbr1 ExpiryTime:2025-04-07 13:13:58 +0000 UTC Type:0 Mac:52:54:00:1e:96:60 Iaid: IPaddr:192.168.39.112 Prefix:24 Hostname:addons-660533 Clientid:01:52:54:00:1e:96:60}
	I0407 12:14:31.649026 1170330 main.go:141] libmachine: (addons-660533) Calling .GetSSHKeyPath
	I0407 12:14:31.649212 1170330 main.go:141] libmachine: (addons-660533) Calling .GetSSHUsername
	I0407 12:14:31.649356 1170330 sshutil.go:53] new ssh client: &{IP:192.168.39.112 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20602-1162386/.minikube/machines/addons-660533/id_rsa Username:docker}
	I0407 12:14:31.649796 1170330 main.go:141] libmachine: (addons-660533) DBG | domain addons-660533 has defined IP address 192.168.39.112 and MAC address 52:54:00:1e:96:60 in network mk-addons-660533
	I0407 12:14:31.650034 1170330 main.go:141] libmachine: (addons-660533) Calling .GetSSHPort
	I0407 12:14:31.650271 1170330 main.go:141] libmachine: (addons-660533) Calling .GetSSHKeyPath
	I0407 12:14:31.650423 1170330 main.go:141] libmachine: (addons-660533) Calling .GetSSHUsername
	I0407 12:14:31.650654 1170330 sshutil.go:53] new ssh client: &{IP:192.168.39.112 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20602-1162386/.minikube/machines/addons-660533/id_rsa Username:docker}
	I0407 12:14:31.651723 1170330 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-snapshotter:v6.1.0
	I0407 12:14:31.651878 1170330 main.go:141] libmachine: (addons-660533) DBG | domain addons-660533 has defined MAC address 52:54:00:1e:96:60 in network mk-addons-660533
	I0407 12:14:31.652356 1170330 main.go:141] libmachine: (addons-660533) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:1e:96:60", ip: ""} in network mk-addons-660533: {Iface:virbr1 ExpiryTime:2025-04-07 13:13:58 +0000 UTC Type:0 Mac:52:54:00:1e:96:60 Iaid: IPaddr:192.168.39.112 Prefix:24 Hostname:addons-660533 Clientid:01:52:54:00:1e:96:60}
	I0407 12:14:31.652375 1170330 main.go:141] libmachine: (addons-660533) DBG | domain addons-660533 has defined IP address 192.168.39.112 and MAC address 52:54:00:1e:96:60 in network mk-addons-660533
	I0407 12:14:31.652608 1170330 main.go:141] libmachine: (addons-660533) Calling .GetSSHPort
	I0407 12:14:31.652885 1170330 main.go:141] libmachine: (addons-660533) Calling .GetSSHKeyPath
	I0407 12:14:31.653077 1170330 main.go:141] libmachine: (addons-660533) Calling .GetSSHUsername
	I0407 12:14:31.653201 1170330 sshutil.go:53] new ssh client: &{IP:192.168.39.112 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20602-1162386/.minikube/machines/addons-660533/id_rsa Username:docker}
	I0407 12:14:31.654794 1170330 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36073
	I0407 12:14:31.655530 1170330 main.go:141] libmachine: () Calling .GetVersion
	I0407 12:14:31.656238 1170330 main.go:141] libmachine: Using API Version  1
	I0407 12:14:31.656273 1170330 main.go:141] libmachine: () Calling .SetConfigRaw
	I0407 12:14:31.656306 1170330 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-provisioner:v3.3.0
	I0407 12:14:31.656759 1170330 main.go:141] libmachine: () Calling .GetMachineName
	I0407 12:14:31.657111 1170330 main.go:141] libmachine: (addons-660533) Calling .GetState
	W0407 12:14:31.657313 1170330 sshutil.go:64] dial failure (will retry): ssh: handshake failed: read tcp 192.168.39.1:48406->192.168.39.112:22: read: connection reset by peer
	I0407 12:14:31.657376 1170330 retry.go:31] will retry after 198.456635ms: ssh: handshake failed: read tcp 192.168.39.1:48406->192.168.39.112:22: read: connection reset by peer
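The handshake failure here (and the second one at 12:14:31.857 below) is transient: several addon installers open SSH sessions to the guest at once, the guest's sshd resets one of the handshakes, and the caller backs off briefly and retries. A rough shell sketch of that retry-with-backoff behaviour, reusing the key path and address logged above (illustrative only, not minikube's actual retry code):
	delay=0.2
	for attempt in 1 2 3 4 5; do
	  ssh -i /home/jenkins/minikube-integration/20602-1162386/.minikube/machines/addons-660533/id_rsa \
	      -o ConnectTimeout=2 docker@192.168.39.112 true && break
	  sleep "$delay"
	  delay=$(awk -v d="$delay" 'BEGIN{print d*2}')   # grow the delay after each failed attempt
	done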
	I0407 12:14:31.659696 1170330 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-attacher:v4.0.0
	I0407 12:14:31.661465 1170330 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-external-health-monitor-controller:v0.7.0
	I0407 12:14:31.662988 1170330 main.go:141] libmachine: (addons-660533) Calling .DriverName
	I0407 12:14:31.663395 1170330 addons.go:435] installing /etc/kubernetes/addons/storageclass.yaml
	I0407 12:14:31.663420 1170330 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0407 12:14:31.663447 1170330 main.go:141] libmachine: (addons-660533) Calling .GetSSHHostname
	I0407 12:14:31.665521 1170330 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-node-driver-registrar:v2.6.0
	I0407 12:14:31.667464 1170330 out.go:177]   - Using image registry.k8s.io/sig-storage/hostpathplugin:v1.9.0
	I0407 12:14:31.667841 1170330 main.go:141] libmachine: (addons-660533) DBG | domain addons-660533 has defined MAC address 52:54:00:1e:96:60 in network mk-addons-660533
	I0407 12:14:31.668488 1170330 main.go:141] libmachine: (addons-660533) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:1e:96:60", ip: ""} in network mk-addons-660533: {Iface:virbr1 ExpiryTime:2025-04-07 13:13:58 +0000 UTC Type:0 Mac:52:54:00:1e:96:60 Iaid: IPaddr:192.168.39.112 Prefix:24 Hostname:addons-660533 Clientid:01:52:54:00:1e:96:60}
	I0407 12:14:31.668518 1170330 main.go:141] libmachine: (addons-660533) DBG | domain addons-660533 has defined IP address 192.168.39.112 and MAC address 52:54:00:1e:96:60 in network mk-addons-660533
	I0407 12:14:31.668755 1170330 main.go:141] libmachine: (addons-660533) Calling .GetSSHPort
	I0407 12:14:31.669018 1170330 addons.go:435] installing /etc/kubernetes/addons/rbac-external-attacher.yaml
	I0407 12:14:31.669057 1170330 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-attacher.yaml --> /etc/kubernetes/addons/rbac-external-attacher.yaml (3073 bytes)
	I0407 12:14:31.669081 1170330 main.go:141] libmachine: (addons-660533) Calling .GetSSHHostname
	I0407 12:14:31.669032 1170330 main.go:141] libmachine: (addons-660533) Calling .GetSSHKeyPath
	I0407 12:14:31.669376 1170330 main.go:141] libmachine: (addons-660533) Calling .GetSSHUsername
	I0407 12:14:31.669562 1170330 sshutil.go:53] new ssh client: &{IP:192.168.39.112 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20602-1162386/.minikube/machines/addons-660533/id_rsa Username:docker}
	I0407 12:14:31.673202 1170330 main.go:141] libmachine: (addons-660533) DBG | domain addons-660533 has defined MAC address 52:54:00:1e:96:60 in network mk-addons-660533
	I0407 12:14:31.673879 1170330 main.go:141] libmachine: (addons-660533) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:1e:96:60", ip: ""} in network mk-addons-660533: {Iface:virbr1 ExpiryTime:2025-04-07 13:13:58 +0000 UTC Type:0 Mac:52:54:00:1e:96:60 Iaid: IPaddr:192.168.39.112 Prefix:24 Hostname:addons-660533 Clientid:01:52:54:00:1e:96:60}
	I0407 12:14:31.673919 1170330 main.go:141] libmachine: (addons-660533) DBG | domain addons-660533 has defined IP address 192.168.39.112 and MAC address 52:54:00:1e:96:60 in network mk-addons-660533
	I0407 12:14:31.674100 1170330 main.go:141] libmachine: (addons-660533) Calling .GetSSHPort
	I0407 12:14:31.674367 1170330 main.go:141] libmachine: (addons-660533) Calling .GetSSHKeyPath
	I0407 12:14:31.674651 1170330 main.go:141] libmachine: (addons-660533) Calling .GetSSHUsername
	I0407 12:14:31.674832 1170330 sshutil.go:53] new ssh client: &{IP:192.168.39.112 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20602-1162386/.minikube/machines/addons-660533/id_rsa Username:docker}
	W0407 12:14:31.857441 1170330 sshutil.go:64] dial failure (will retry): ssh: handshake failed: read tcp 192.168.39.1:48438->192.168.39.112:22: read: connection reset by peer
	I0407 12:14:31.857485 1170330 retry.go:31] will retry after 337.025801ms: ssh: handshake failed: read tcp 192.168.39.1:48438->192.168.39.112:22: read: connection reset by peer
	I0407 12:14:31.995552 1170330 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0407 12:14:31.995608 1170330 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.32.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.39.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.32.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I0407 12:14:32.021977 1170330 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.2/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0407 12:14:32.070398 1170330 addons.go:435] installing /etc/kubernetes/addons/registry-svc.yaml
	I0407 12:14:32.070427 1170330 ssh_runner.go:362] scp registry/registry-svc.yaml --> /etc/kubernetes/addons/registry-svc.yaml (398 bytes)
	I0407 12:14:32.158277 1170330 addons.go:435] installing /etc/kubernetes/addons/yakd-sa.yaml
	I0407 12:14:32.158319 1170330 ssh_runner.go:362] scp yakd/yakd-sa.yaml --> /etc/kubernetes/addons/yakd-sa.yaml (247 bytes)
	I0407 12:14:32.158717 1170330 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.2/kubectl apply -f /etc/kubernetes/addons/deployment.yaml
	I0407 12:14:32.194266 1170330 addons.go:435] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml
	I0407 12:14:32.194293 1170330 ssh_runner.go:362] scp volumesnapshots/snapshot.storage.k8s.io_volumesnapshotclasses.yaml --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml (6471 bytes)
	I0407 12:14:32.196995 1170330 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.2/kubectl apply -f /etc/kubernetes/addons/nvidia-device-plugin.yaml
	I0407 12:14:32.199562 1170330 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.2/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0407 12:14:32.203121 1170330 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.2/kubectl apply -f /etc/kubernetes/addons/amd-gpu-device-plugin.yaml
	I0407 12:14:32.206629 1170330 addons.go:435] installing /etc/kubernetes/addons/ig-deployment.yaml
	I0407 12:14:32.206665 1170330 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ig-deployment.yaml (14539 bytes)
	I0407 12:14:32.213400 1170330 addons.go:435] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I0407 12:14:32.213431 1170330 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1907 bytes)
	I0407 12:14:32.215427 1170330 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.2/kubectl apply -f /etc/kubernetes/addons/ingress-dns-pod.yaml
	I0407 12:14:32.262988 1170330 node_ready.go:35] waiting up to 6m0s for node "addons-660533" to be "Ready" ...
	I0407 12:14:32.266830 1170330 node_ready.go:49] node "addons-660533" has status "Ready":"True"
	I0407 12:14:32.266862 1170330 node_ready.go:38] duration metric: took 3.838975ms for node "addons-660533" to be "Ready" ...
	I0407 12:14:32.266873 1170330 pod_ready.go:36] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0407 12:14:32.274504 1170330 pod_ready.go:79] waiting up to 6m0s for pod "coredns-668d6bf9bc-286nb" in "kube-system" namespace to be "Ready" ...
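The readiness checks above (node "Ready" first, then each system-critical pod by label) have straightforward kubectl equivalents; roughly, against the same context (illustrative commands, not what the harness literally runs):
	kubectl --context addons-660533 wait --for=condition=Ready node/addons-660533 --timeout=6m0s
	kubectl --context addons-660533 -n kube-system wait --for=condition=Ready pod -l k8s-app=kube-dns --timeout=6m0s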
	I0407 12:14:32.291522 1170330 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.2/kubectl apply -f /etc/kubernetes/addons/storage-provisioner-rancher.yaml
	I0407 12:14:32.293889 1170330 addons.go:435] installing /etc/kubernetes/addons/rbac-hostpath.yaml
	I0407 12:14:32.293969 1170330 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-hostpath.yaml --> /etc/kubernetes/addons/rbac-hostpath.yaml (4266 bytes)
	I0407 12:14:32.341662 1170330 addons.go:435] installing /etc/kubernetes/addons/registry-proxy.yaml
	I0407 12:14:32.341698 1170330 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/registry-proxy.yaml (947 bytes)
	I0407 12:14:32.372504 1170330 addons.go:435] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml
	I0407 12:14:32.372538 1170330 ssh_runner.go:362] scp volumesnapshots/snapshot.storage.k8s.io_volumesnapshotcontents.yaml --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml (23126 bytes)
	I0407 12:14:32.453209 1170330 addons.go:435] installing /etc/kubernetes/addons/yakd-crb.yaml
	I0407 12:14:32.453249 1170330 ssh_runner.go:362] scp yakd/yakd-crb.yaml --> /etc/kubernetes/addons/yakd-crb.yaml (422 bytes)
	I0407 12:14:32.465561 1170330 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.2/kubectl apply -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml
	I0407 12:14:32.467926 1170330 addons.go:435] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I0407 12:14:32.467987 1170330 ssh_runner.go:362] scp metrics-server/metrics-server-rbac.yaml --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I0407 12:14:32.611542 1170330 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.2/kubectl apply -f /etc/kubernetes/addons/registry-rc.yaml -f /etc/kubernetes/addons/registry-svc.yaml -f /etc/kubernetes/addons/registry-proxy.yaml
	I0407 12:14:32.626483 1170330 addons.go:435] installing /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml
	I0407 12:14:32.626522 1170330 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-health-monitor-controller.yaml --> /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml (3038 bytes)
	I0407 12:14:32.690785 1170330 addons.go:435] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml
	I0407 12:14:32.690832 1170330 ssh_runner.go:362] scp volumesnapshots/snapshot.storage.k8s.io_volumesnapshots.yaml --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml (19582 bytes)
	I0407 12:14:32.751997 1170330 addons.go:435] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I0407 12:14:32.752029 1170330 ssh_runner.go:362] scp metrics-server/metrics-server-service.yaml --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I0407 12:14:32.770231 1170330 addons.go:435] installing /etc/kubernetes/addons/yakd-svc.yaml
	I0407 12:14:32.770264 1170330 ssh_runner.go:362] scp yakd/yakd-svc.yaml --> /etc/kubernetes/addons/yakd-svc.yaml (412 bytes)
	I0407 12:14:32.819132 1170330 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.2/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml
	I0407 12:14:32.846654 1170330 addons.go:435] installing /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml
	I0407 12:14:32.846695 1170330 ssh_runner.go:362] scp volumesnapshots/rbac-volume-snapshot-controller.yaml --> /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml (3545 bytes)
	I0407 12:14:32.867169 1170330 addons.go:435] installing /etc/kubernetes/addons/rbac-external-provisioner.yaml
	I0407 12:14:32.867202 1170330 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-provisioner.yaml --> /etc/kubernetes/addons/rbac-external-provisioner.yaml (4442 bytes)
	I0407 12:14:32.949018 1170330 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.2/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I0407 12:14:33.011780 1170330 addons.go:435] installing /etc/kubernetes/addons/yakd-dp.yaml
	I0407 12:14:33.011806 1170330 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/yakd-dp.yaml (2017 bytes)
	I0407 12:14:33.095999 1170330 addons.go:435] installing /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I0407 12:14:33.096027 1170330 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml (1475 bytes)
	I0407 12:14:33.137670 1170330 addons.go:435] installing /etc/kubernetes/addons/rbac-external-resizer.yaml
	I0407 12:14:33.137717 1170330 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-resizer.yaml --> /etc/kubernetes/addons/rbac-external-resizer.yaml (2943 bytes)
	I0407 12:14:33.283697 1170330 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.2/kubectl apply -f /etc/kubernetes/addons/yakd-ns.yaml -f /etc/kubernetes/addons/yakd-sa.yaml -f /etc/kubernetes/addons/yakd-crb.yaml -f /etc/kubernetes/addons/yakd-svc.yaml -f /etc/kubernetes/addons/yakd-dp.yaml
	I0407 12:14:33.416721 1170330 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.2/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I0407 12:14:33.486758 1170330 addons.go:435] installing /etc/kubernetes/addons/rbac-external-snapshotter.yaml
	I0407 12:14:33.486807 1170330 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-snapshotter.yaml --> /etc/kubernetes/addons/rbac-external-snapshotter.yaml (3149 bytes)
	I0407 12:14:33.956489 1170330 addons.go:435] installing /etc/kubernetes/addons/csi-hostpath-attacher.yaml
	I0407 12:14:33.956530 1170330 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-attacher.yaml (2143 bytes)
	I0407 12:14:34.123819 1170330 addons.go:435] installing /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml
	I0407 12:14:34.123884 1170330 ssh_runner.go:362] scp csi-hostpath-driver/deploy/csi-hostpath-driverinfo.yaml --> /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml (1274 bytes)
	I0407 12:14:34.254415 1170330 addons.go:435] installing /etc/kubernetes/addons/csi-hostpath-plugin.yaml
	I0407 12:14:34.254452 1170330 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-plugin.yaml (8201 bytes)
	I0407 12:14:34.504587 1170330 pod_ready.go:103] pod "coredns-668d6bf9bc-286nb" in "kube-system" namespace has status "Ready":"False"
	I0407 12:14:34.513209 1170330 ssh_runner.go:235] Completed: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.32.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.39.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.32.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -": (2.517550034s)
	I0407 12:14:34.513255 1170330 start.go:971] {"host.minikube.internal": 192.168.39.1} host record injected into CoreDNS's ConfigMap
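Given the sed expressions in the pipeline that just completed, the patched Corefile should gain a log directive ahead of errors and a hosts block ahead of the forward directive, roughly as below (a reconstruction from the sed script, trimmed with "..."; the live object can be inspected with the kubectl command at the end):
	.:53 {
	    log
	    errors
	    ...
	    hosts {
	       192.168.39.1 host.minikube.internal
	       fallthrough
	    }
	    forward . /etc/resolv.conf
	    ...
	}
	# verify the patched ConfigMap:
	kubectl --context addons-660533 -n kube-system get configmap coredns -o jsonpath='{.data.Corefile}'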
	I0407 12:14:34.758957 1170330 addons.go:435] installing /etc/kubernetes/addons/csi-hostpath-resizer.yaml
	I0407 12:14:34.758992 1170330 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-resizer.yaml (2191 bytes)
	I0407 12:14:35.029827 1170330 kapi.go:214] "coredns" deployment in "kube-system" namespace and "addons-660533" context rescaled to 1 replicas
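Rescaling coredns to a single replica (one is enough on a single-node cluster) is done here through minikube's API client; the roughly equivalent CLI command would be:
	kubectl --context addons-660533 -n kube-system scale deployment coredns --replicas=1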
	I0407 12:14:35.131924 1170330 addons.go:435] installing /etc/kubernetes/addons/csi-hostpath-storageclass.yaml
	I0407 12:14:35.131964 1170330 ssh_runner.go:362] scp csi-hostpath-driver/deploy/csi-hostpath-storageclass.yaml --> /etc/kubernetes/addons/csi-hostpath-storageclass.yaml (846 bytes)
	I0407 12:14:35.532531 1170330 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.2/kubectl apply -f /etc/kubernetes/addons/rbac-external-attacher.yaml -f /etc/kubernetes/addons/rbac-hostpath.yaml -f /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml -f /etc/kubernetes/addons/rbac-external-provisioner.yaml -f /etc/kubernetes/addons/rbac-external-resizer.yaml -f /etc/kubernetes/addons/rbac-external-snapshotter.yaml -f /etc/kubernetes/addons/csi-hostpath-attacher.yaml -f /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml -f /etc/kubernetes/addons/csi-hostpath-plugin.yaml -f /etc/kubernetes/addons/csi-hostpath-resizer.yaml -f /etc/kubernetes/addons/csi-hostpath-storageclass.yaml
	I0407 12:14:36.849462 1170330 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.2/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (4.827413619s)
	I0407 12:14:36.849522 1170330 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.2/kubectl apply -f /etc/kubernetes/addons/deployment.yaml: (4.690763693s)
	I0407 12:14:36.849551 1170330 main.go:141] libmachine: Making call to close driver server
	I0407 12:14:36.849565 1170330 main.go:141] libmachine: (addons-660533) Calling .Close
	I0407 12:14:36.849572 1170330 main.go:141] libmachine: Making call to close driver server
	I0407 12:14:36.849588 1170330 main.go:141] libmachine: (addons-660533) Calling .Close
	I0407 12:14:36.849677 1170330 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.2/kubectl apply -f /etc/kubernetes/addons/nvidia-device-plugin.yaml: (4.652636019s)
	I0407 12:14:36.849737 1170330 main.go:141] libmachine: Making call to close driver server
	I0407 12:14:36.849752 1170330 main.go:141] libmachine: (addons-660533) Calling .Close
	I0407 12:14:36.849779 1170330 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.2/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (4.650179653s)
	I0407 12:14:36.849883 1170330 main.go:141] libmachine: Making call to close driver server
	I0407 12:14:36.849909 1170330 main.go:141] libmachine: (addons-660533) Calling .Close
	I0407 12:14:36.849991 1170330 main.go:141] libmachine: (addons-660533) DBG | Closing plugin on server side
	I0407 12:14:36.850048 1170330 main.go:141] libmachine: (addons-660533) DBG | Closing plugin on server side
	I0407 12:14:36.849885 1170330 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.2/kubectl apply -f /etc/kubernetes/addons/amd-gpu-device-plugin.yaml: (4.64672552s)
	I0407 12:14:36.850077 1170330 main.go:141] libmachine: Making call to close driver server
	I0407 12:14:36.850084 1170330 main.go:141] libmachine: (addons-660533) Calling .Close
	I0407 12:14:36.850132 1170330 main.go:141] libmachine: (addons-660533) DBG | Closing plugin on server side
	I0407 12:14:36.850170 1170330 main.go:141] libmachine: Successfully made call to close driver server
	I0407 12:14:36.850178 1170330 main.go:141] libmachine: Making call to close connection to plugin binary
	I0407 12:14:36.850177 1170330 main.go:141] libmachine: Successfully made call to close driver server
	I0407 12:14:36.850186 1170330 main.go:141] libmachine: Making call to close driver server
	I0407 12:14:36.850189 1170330 main.go:141] libmachine: Making call to close connection to plugin binary
	I0407 12:14:36.850194 1170330 main.go:141] libmachine: (addons-660533) Calling .Close
	I0407 12:14:36.850198 1170330 main.go:141] libmachine: Making call to close driver server
	I0407 12:14:36.850205 1170330 main.go:141] libmachine: (addons-660533) Calling .Close
	I0407 12:14:36.849927 1170330 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.2/kubectl apply -f /etc/kubernetes/addons/ingress-dns-pod.yaml: (4.63447173s)
	I0407 12:14:36.852754 1170330 main.go:141] libmachine: Making call to close driver server
	I0407 12:14:36.852777 1170330 main.go:141] libmachine: (addons-660533) Calling .Close
	I0407 12:14:36.850320 1170330 main.go:141] libmachine: (addons-660533) DBG | Closing plugin on server side
	I0407 12:14:36.850342 1170330 main.go:141] libmachine: Successfully made call to close driver server
	I0407 12:14:36.852904 1170330 main.go:141] libmachine: Making call to close connection to plugin binary
	I0407 12:14:36.850362 1170330 main.go:141] libmachine: (addons-660533) DBG | Closing plugin on server side
	I0407 12:14:36.850380 1170330 main.go:141] libmachine: Successfully made call to close driver server
	I0407 12:14:36.850483 1170330 main.go:141] libmachine: Successfully made call to close driver server
	I0407 12:14:36.850519 1170330 main.go:141] libmachine: (addons-660533) DBG | Closing plugin on server side
	I0407 12:14:36.850536 1170330 main.go:141] libmachine: (addons-660533) DBG | Closing plugin on server side
	I0407 12:14:36.852672 1170330 main.go:141] libmachine: Successfully made call to close driver server
	I0407 12:14:36.853002 1170330 main.go:141] libmachine: Making call to close connection to plugin binary
	I0407 12:14:36.853058 1170330 main.go:141] libmachine: Making call to close driver server
	I0407 12:14:36.853080 1170330 main.go:141] libmachine: (addons-660533) Calling .Close
	I0407 12:14:36.853171 1170330 main.go:141] libmachine: (addons-660533) DBG | Closing plugin on server side
	I0407 12:14:36.853261 1170330 main.go:141] libmachine: Successfully made call to close driver server
	I0407 12:14:36.853302 1170330 main.go:141] libmachine: Making call to close connection to plugin binary
	I0407 12:14:36.853314 1170330 main.go:141] libmachine: Making call to close driver server
	I0407 12:14:36.853332 1170330 main.go:141] libmachine: (addons-660533) Calling .Close
	I0407 12:14:36.853440 1170330 main.go:141] libmachine: (addons-660533) DBG | Closing plugin on server side
	I0407 12:14:36.853012 1170330 main.go:141] libmachine: Making call to close connection to plugin binary
	I0407 12:14:36.853521 1170330 main.go:141] libmachine: Successfully made call to close driver server
	I0407 12:14:36.853537 1170330 main.go:141] libmachine: Making call to close connection to plugin binary
	I0407 12:14:36.852955 1170330 main.go:141] libmachine: Successfully made call to close driver server
	I0407 12:14:36.853629 1170330 main.go:141] libmachine: Making call to close connection to plugin binary
	I0407 12:14:36.853643 1170330 main.go:141] libmachine: Making call to close driver server
	I0407 12:14:36.853651 1170330 main.go:141] libmachine: (addons-660533) Calling .Close
	I0407 12:14:36.853019 1170330 main.go:141] libmachine: Making call to close connection to plugin binary
	I0407 12:14:36.853032 1170330 main.go:141] libmachine: Making call to close driver server
	I0407 12:14:36.853768 1170330 main.go:141] libmachine: (addons-660533) Calling .Close
	I0407 12:14:36.853496 1170330 main.go:141] libmachine: Successfully made call to close driver server
	I0407 12:14:36.854210 1170330 main.go:141] libmachine: Making call to close connection to plugin binary
	I0407 12:14:36.854418 1170330 main.go:141] libmachine: (addons-660533) DBG | Closing plugin on server side
	I0407 12:14:36.854449 1170330 main.go:141] libmachine: Successfully made call to close driver server
	I0407 12:14:36.854455 1170330 main.go:141] libmachine: Making call to close connection to plugin binary
	I0407 12:14:36.854624 1170330 main.go:141] libmachine: (addons-660533) DBG | Closing plugin on server side
	I0407 12:14:36.854659 1170330 main.go:141] libmachine: Successfully made call to close driver server
	I0407 12:14:36.854672 1170330 main.go:141] libmachine: Making call to close connection to plugin binary
	I0407 12:14:36.864701 1170330 pod_ready.go:103] pod "coredns-668d6bf9bc-286nb" in "kube-system" namespace has status "Ready":"False"
	I0407 12:14:36.957418 1170330 main.go:141] libmachine: Making call to close driver server
	I0407 12:14:36.957445 1170330 main.go:141] libmachine: (addons-660533) Calling .Close
	I0407 12:14:36.957922 1170330 main.go:141] libmachine: Successfully made call to close driver server
	I0407 12:14:36.957975 1170330 main.go:141] libmachine: Making call to close connection to plugin binary
	I0407 12:14:37.407831 1170330 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.2/kubectl apply -f /etc/kubernetes/addons/storage-provisioner-rancher.yaml: (5.11626062s)
	I0407 12:14:37.407900 1170330 main.go:141] libmachine: Making call to close driver server
	I0407 12:14:37.407911 1170330 main.go:141] libmachine: (addons-660533) Calling .Close
	I0407 12:14:37.408420 1170330 main.go:141] libmachine: Successfully made call to close driver server
	I0407 12:14:37.408444 1170330 main.go:141] libmachine: Making call to close connection to plugin binary
	I0407 12:14:37.408455 1170330 main.go:141] libmachine: Making call to close driver server
	I0407 12:14:37.408465 1170330 main.go:141] libmachine: (addons-660533) Calling .Close
	I0407 12:14:37.408714 1170330 main.go:141] libmachine: Successfully made call to close driver server
	I0407 12:14:37.408730 1170330 main.go:141] libmachine: Making call to close connection to plugin binary
	I0407 12:14:37.605962 1170330 main.go:141] libmachine: Making call to close driver server
	I0407 12:14:37.606005 1170330 main.go:141] libmachine: (addons-660533) Calling .Close
	I0407 12:14:37.606450 1170330 main.go:141] libmachine: Successfully made call to close driver server
	I0407 12:14:37.606477 1170330 main.go:141] libmachine: Making call to close connection to plugin binary
	I0407 12:14:38.118123 1170330 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.2/kubectl apply -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: (5.652510541s)
	I0407 12:14:38.118194 1170330 main.go:141] libmachine: Making call to close driver server
	I0407 12:14:38.118209 1170330 main.go:141] libmachine: (addons-660533) Calling .Close
	I0407 12:14:38.118242 1170330 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.2/kubectl apply -f /etc/kubernetes/addons/registry-rc.yaml -f /etc/kubernetes/addons/registry-svc.yaml -f /etc/kubernetes/addons/registry-proxy.yaml: (5.506653902s)
	I0407 12:14:38.118309 1170330 main.go:141] libmachine: Making call to close driver server
	I0407 12:14:38.118331 1170330 main.go:141] libmachine: (addons-660533) Calling .Close
	I0407 12:14:38.118609 1170330 main.go:141] libmachine: Successfully made call to close driver server
	I0407 12:14:38.118627 1170330 main.go:141] libmachine: Making call to close connection to plugin binary
	I0407 12:14:38.118639 1170330 main.go:141] libmachine: Making call to close driver server
	I0407 12:14:38.118647 1170330 main.go:141] libmachine: (addons-660533) Calling .Close
	I0407 12:14:38.118770 1170330 main.go:141] libmachine: (addons-660533) DBG | Closing plugin on server side
	I0407 12:14:38.118829 1170330 main.go:141] libmachine: Successfully made call to close driver server
	I0407 12:14:38.118843 1170330 main.go:141] libmachine: Making call to close connection to plugin binary
	I0407 12:14:38.118852 1170330 main.go:141] libmachine: Making call to close driver server
	I0407 12:14:38.118864 1170330 main.go:141] libmachine: (addons-660533) Calling .Close
	I0407 12:14:38.118911 1170330 main.go:141] libmachine: Successfully made call to close driver server
	I0407 12:14:38.118934 1170330 main.go:141] libmachine: Making call to close connection to plugin binary
	I0407 12:14:38.118949 1170330 addons.go:479] Verifying addon registry=true in "addons-660533"
	I0407 12:14:38.121277 1170330 out.go:177] * Verifying registry addon...
	I0407 12:14:38.121335 1170330 main.go:141] libmachine: (addons-660533) DBG | Closing plugin on server side
	I0407 12:14:38.121311 1170330 main.go:141] libmachine: Successfully made call to close driver server
	I0407 12:14:38.121359 1170330 main.go:141] libmachine: Making call to close connection to plugin binary
	I0407 12:14:38.124262 1170330 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=registry" in ns "kube-system" ...
	I0407 12:14:38.135871 1170330 kapi.go:86] Found 2 Pods for label selector kubernetes.io/minikube-addons=registry
	I0407 12:14:38.135912 1170330 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
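The registry verification that starts here polls kube-system for pods carrying the kubernetes.io/minikube-addons=registry label until they report Ready; done by hand, that is approximately (timeout chosen for illustration):
	kubectl --context addons-660533 -n kube-system get pods -l kubernetes.io/minikube-addons=registry
	kubectl --context addons-660533 -n kube-system wait --for=condition=Ready pod \
	  -l kubernetes.io/minikube-addons=registry --timeout=6m0s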
	I0407 12:14:38.445064 1170330 ssh_runner.go:362] scp memory --> /var/lib/minikube/google_application_credentials.json (162 bytes)
	I0407 12:14:38.445120 1170330 main.go:141] libmachine: (addons-660533) Calling .GetSSHHostname
	I0407 12:14:38.449093 1170330 main.go:141] libmachine: (addons-660533) DBG | domain addons-660533 has defined MAC address 52:54:00:1e:96:60 in network mk-addons-660533
	I0407 12:14:38.449949 1170330 main.go:141] libmachine: (addons-660533) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:1e:96:60", ip: ""} in network mk-addons-660533: {Iface:virbr1 ExpiryTime:2025-04-07 13:13:58 +0000 UTC Type:0 Mac:52:54:00:1e:96:60 Iaid: IPaddr:192.168.39.112 Prefix:24 Hostname:addons-660533 Clientid:01:52:54:00:1e:96:60}
	I0407 12:14:38.449986 1170330 main.go:141] libmachine: (addons-660533) DBG | domain addons-660533 has defined IP address 192.168.39.112 and MAC address 52:54:00:1e:96:60 in network mk-addons-660533
	I0407 12:14:38.450274 1170330 main.go:141] libmachine: (addons-660533) Calling .GetSSHPort
	I0407 12:14:38.450547 1170330 main.go:141] libmachine: (addons-660533) Calling .GetSSHKeyPath
	I0407 12:14:38.450780 1170330 main.go:141] libmachine: (addons-660533) Calling .GetSSHUsername
	I0407 12:14:38.450956 1170330 sshutil.go:53] new ssh client: &{IP:192.168.39.112 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20602-1162386/.minikube/machines/addons-660533/id_rsa Username:docker}
	I0407 12:14:38.725756 1170330 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0407 12:14:38.970695 1170330 ssh_runner.go:362] scp memory --> /var/lib/minikube/google_cloud_project (12 bytes)
	I0407 12:14:39.090115 1170330 addons.go:238] Setting addon gcp-auth=true in "addons-660533"
	I0407 12:14:39.090188 1170330 host.go:66] Checking if "addons-660533" exists ...
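Setting addon gcp-auth=true here is done programmatically, presumably because application credentials were found on the host (they were copied to the guest at 12:14:38.445); the CLI equivalent, for reference, would be:
	out/minikube-linux-amd64 -p addons-660533 addons enable gcp-auth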
	I0407 12:14:39.090535 1170330 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0407 12:14:39.090576 1170330 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0407 12:14:39.109045 1170330 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40715
	I0407 12:14:39.109938 1170330 main.go:141] libmachine: () Calling .GetVersion
	I0407 12:14:39.110608 1170330 main.go:141] libmachine: Using API Version  1
	I0407 12:14:39.110641 1170330 main.go:141] libmachine: () Calling .SetConfigRaw
	I0407 12:14:39.111256 1170330 main.go:141] libmachine: () Calling .GetMachineName
	I0407 12:14:39.112165 1170330 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0407 12:14:39.112216 1170330 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0407 12:14:39.129953 1170330 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43025
	I0407 12:14:39.130641 1170330 main.go:141] libmachine: () Calling .GetVersion
	I0407 12:14:39.130943 1170330 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0407 12:14:39.131429 1170330 main.go:141] libmachine: Using API Version  1
	I0407 12:14:39.131449 1170330 main.go:141] libmachine: () Calling .SetConfigRaw
	I0407 12:14:39.132094 1170330 main.go:141] libmachine: () Calling .GetMachineName
	I0407 12:14:39.132420 1170330 main.go:141] libmachine: (addons-660533) Calling .GetState
	I0407 12:14:39.134979 1170330 main.go:141] libmachine: (addons-660533) Calling .DriverName
	I0407 12:14:39.135531 1170330 ssh_runner.go:195] Run: cat /var/lib/minikube/google_application_credentials.json
	I0407 12:14:39.135562 1170330 main.go:141] libmachine: (addons-660533) Calling .GetSSHHostname
	I0407 12:14:39.141245 1170330 main.go:141] libmachine: (addons-660533) DBG | domain addons-660533 has defined MAC address 52:54:00:1e:96:60 in network mk-addons-660533
	I0407 12:14:39.141852 1170330 main.go:141] libmachine: (addons-660533) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:1e:96:60", ip: ""} in network mk-addons-660533: {Iface:virbr1 ExpiryTime:2025-04-07 13:13:58 +0000 UTC Type:0 Mac:52:54:00:1e:96:60 Iaid: IPaddr:192.168.39.112 Prefix:24 Hostname:addons-660533 Clientid:01:52:54:00:1e:96:60}
	I0407 12:14:39.141898 1170330 main.go:141] libmachine: (addons-660533) DBG | domain addons-660533 has defined IP address 192.168.39.112 and MAC address 52:54:00:1e:96:60 in network mk-addons-660533
	I0407 12:14:39.142208 1170330 main.go:141] libmachine: (addons-660533) Calling .GetSSHPort
	I0407 12:14:39.142626 1170330 main.go:141] libmachine: (addons-660533) Calling .GetSSHKeyPath
	I0407 12:14:39.142891 1170330 main.go:141] libmachine: (addons-660533) Calling .GetSSHUsername
	I0407 12:14:39.143144 1170330 sshutil.go:53] new ssh client: &{IP:192.168.39.112 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20602-1162386/.minikube/machines/addons-660533/id_rsa Username:docker}
	I0407 12:14:39.333492 1170330 pod_ready.go:93] pod "coredns-668d6bf9bc-286nb" in "kube-system" namespace has status "Ready":"True"
	I0407 12:14:39.333530 1170330 pod_ready.go:82] duration metric: took 7.058973538s for pod "coredns-668d6bf9bc-286nb" in "kube-system" namespace to be "Ready" ...
	I0407 12:14:39.333541 1170330 pod_ready.go:79] waiting up to 6m0s for pod "coredns-668d6bf9bc-cnkpx" in "kube-system" namespace to be "Ready" ...
	I0407 12:14:39.418786 1170330 pod_ready.go:93] pod "coredns-668d6bf9bc-cnkpx" in "kube-system" namespace has status "Ready":"True"
	I0407 12:14:39.418820 1170330 pod_ready.go:82] duration metric: took 85.271887ms for pod "coredns-668d6bf9bc-cnkpx" in "kube-system" namespace to be "Ready" ...
	I0407 12:14:39.418834 1170330 pod_ready.go:79] waiting up to 6m0s for pod "etcd-addons-660533" in "kube-system" namespace to be "Ready" ...
	I0407 12:14:39.473389 1170330 pod_ready.go:93] pod "etcd-addons-660533" in "kube-system" namespace has status "Ready":"True"
	I0407 12:14:39.473432 1170330 pod_ready.go:82] duration metric: took 54.589284ms for pod "etcd-addons-660533" in "kube-system" namespace to be "Ready" ...
	I0407 12:14:39.473448 1170330 pod_ready.go:79] waiting up to 6m0s for pod "kube-apiserver-addons-660533" in "kube-system" namespace to be "Ready" ...
	I0407 12:14:39.528548 1170330 pod_ready.go:93] pod "kube-apiserver-addons-660533" in "kube-system" namespace has status "Ready":"True"
	I0407 12:14:39.528586 1170330 pod_ready.go:82] duration metric: took 55.126683ms for pod "kube-apiserver-addons-660533" in "kube-system" namespace to be "Ready" ...
	I0407 12:14:39.528602 1170330 pod_ready.go:79] waiting up to 6m0s for pod "kube-controller-manager-addons-660533" in "kube-system" namespace to be "Ready" ...
	I0407 12:14:39.546862 1170330 pod_ready.go:93] pod "kube-controller-manager-addons-660533" in "kube-system" namespace has status "Ready":"True"
	I0407 12:14:39.546903 1170330 pod_ready.go:82] duration metric: took 18.292092ms for pod "kube-controller-manager-addons-660533" in "kube-system" namespace to be "Ready" ...
	I0407 12:14:39.546920 1170330 pod_ready.go:79] waiting up to 6m0s for pod "kube-proxy-fz5dl" in "kube-system" namespace to be "Ready" ...
	I0407 12:14:39.630785 1170330 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0407 12:14:39.706174 1170330 pod_ready.go:93] pod "kube-proxy-fz5dl" in "kube-system" namespace has status "Ready":"True"
	I0407 12:14:39.706215 1170330 pod_ready.go:82] duration metric: took 159.286539ms for pod "kube-proxy-fz5dl" in "kube-system" namespace to be "Ready" ...
	I0407 12:14:39.706230 1170330 pod_ready.go:79] waiting up to 6m0s for pod "kube-scheduler-addons-660533" in "kube-system" namespace to be "Ready" ...
	I0407 12:14:40.080317 1170330 pod_ready.go:93] pod "kube-scheduler-addons-660533" in "kube-system" namespace has status "Ready":"True"
	I0407 12:14:40.080359 1170330 pod_ready.go:82] duration metric: took 374.111158ms for pod "kube-scheduler-addons-660533" in "kube-system" namespace to be "Ready" ...
	I0407 12:14:40.080383 1170330 pod_ready.go:39] duration metric: took 7.81349182s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0407 12:14:40.080409 1170330 api_server.go:52] waiting for apiserver process to appear ...
	I0407 12:14:40.080483 1170330 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0407 12:14:40.134305 1170330 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0407 12:14:40.628015 1170330 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0407 12:14:41.233571 1170330 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0407 12:14:41.301987 1170330 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.2/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml: (8.48280294s)
	I0407 12:14:41.302042 1170330 main.go:141] libmachine: Making call to close driver server
	I0407 12:14:41.302051 1170330 main.go:141] libmachine: (addons-660533) Calling .Close
	I0407 12:14:41.302132 1170330 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.2/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (8.353057242s)
	I0407 12:14:41.302192 1170330 main.go:141] libmachine: Making call to close driver server
	I0407 12:14:41.302196 1170330 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.2/kubectl apply -f /etc/kubernetes/addons/yakd-ns.yaml -f /etc/kubernetes/addons/yakd-sa.yaml -f /etc/kubernetes/addons/yakd-crb.yaml -f /etc/kubernetes/addons/yakd-svc.yaml -f /etc/kubernetes/addons/yakd-dp.yaml: (8.018442806s)
	I0407 12:14:41.302207 1170330 main.go:141] libmachine: (addons-660533) Calling .Close
	I0407 12:14:41.302226 1170330 main.go:141] libmachine: Making call to close driver server
	I0407 12:14:41.302265 1170330 main.go:141] libmachine: (addons-660533) Calling .Close
	I0407 12:14:41.302332 1170330 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.2/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: (7.885559087s)
	W0407 12:14:41.302529 1170330 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.2/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: Process exited with status 1
	stdout:
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotclasses.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotcontents.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshots.snapshot.storage.k8s.io created
	serviceaccount/snapshot-controller created
	clusterrole.rbac.authorization.k8s.io/snapshot-controller-runner created
	clusterrolebinding.rbac.authorization.k8s.io/snapshot-controller-role created
	role.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	rolebinding.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	deployment.apps/snapshot-controller created
	
	stderr:
	error: resource mapping not found for name: "csi-hostpath-snapclass" namespace: "" from "/etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml": no matches for kind "VolumeSnapshotClass" in version "snapshot.storage.k8s.io/v1"
	ensure CRDs are installed first
	I0407 12:14:41.302556 1170330 main.go:141] libmachine: Successfully made call to close driver server
	I0407 12:14:41.302563 1170330 main.go:141] libmachine: (addons-660533) DBG | Closing plugin on server side
	I0407 12:14:41.302570 1170330 main.go:141] libmachine: Making call to close connection to plugin binary
	I0407 12:14:41.302567 1170330 retry.go:31] will retry after 145.598695ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.2/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: Process exited with status 1
	stdout:
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotclasses.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotcontents.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshots.snapshot.storage.k8s.io created
	serviceaccount/snapshot-controller created
	clusterrole.rbac.authorization.k8s.io/snapshot-controller-runner created
	clusterrolebinding.rbac.authorization.k8s.io/snapshot-controller-role created
	role.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	rolebinding.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	deployment.apps/snapshot-controller created
	
	stderr:
	error: resource mapping not found for name: "csi-hostpath-snapclass" namespace: "" from "/etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml": no matches for kind "VolumeSnapshotClass" in version "snapshot.storage.k8s.io/v1"
	ensure CRDs are installed first
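This failure is the usual CRD ordering race: the VolumeSnapshotClass object is submitted in the same apply that creates its CustomResourceDefinitions, so the API server has no mapping for the kind yet; the retry scheduled above (and the --force re-apply at 12:14:41.448 below) resolves it once the CRDs are registered. An equivalent manual sequence, using the file paths from the log and run on the node with the same kubeconfig as the logged commands, would be to install the CRDs, wait for them to be established, then apply the class:
	kubectl apply -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml
	kubectl wait --for=condition=Established crd/volumesnapshotclasses.snapshot.storage.k8s.io --timeout=60s
	kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml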
	I0407 12:14:41.302579 1170330 main.go:141] libmachine: Making call to close driver server
	I0407 12:14:41.302587 1170330 main.go:141] libmachine: (addons-660533) Calling .Close
	I0407 12:14:41.302687 1170330 main.go:141] libmachine: (addons-660533) DBG | Closing plugin on server side
	I0407 12:14:41.302729 1170330 main.go:141] libmachine: Successfully made call to close driver server
	I0407 12:14:41.302758 1170330 main.go:141] libmachine: Making call to close connection to plugin binary
	I0407 12:14:41.302771 1170330 main.go:141] libmachine: Making call to close driver server
	I0407 12:14:41.302778 1170330 main.go:141] libmachine: (addons-660533) Calling .Close
	I0407 12:14:41.302954 1170330 main.go:141] libmachine: (addons-660533) DBG | Closing plugin on server side
	I0407 12:14:41.303047 1170330 main.go:141] libmachine: (addons-660533) DBG | Closing plugin on server side
	I0407 12:14:41.303104 1170330 main.go:141] libmachine: Successfully made call to close driver server
	I0407 12:14:41.303199 1170330 main.go:141] libmachine: Making call to close connection to plugin binary
	I0407 12:14:41.303216 1170330 addons.go:479] Verifying addon ingress=true in "addons-660533"
	I0407 12:14:41.303158 1170330 main.go:141] libmachine: (addons-660533) DBG | Closing plugin on server side
	I0407 12:14:41.303180 1170330 main.go:141] libmachine: Successfully made call to close driver server
	I0407 12:14:41.304604 1170330 main.go:141] libmachine: Making call to close connection to plugin binary
	I0407 12:14:41.305040 1170330 main.go:141] libmachine: Successfully made call to close driver server
	I0407 12:14:41.305066 1170330 main.go:141] libmachine: Making call to close connection to plugin binary
	I0407 12:14:41.305077 1170330 main.go:141] libmachine: Making call to close driver server
	I0407 12:14:41.305087 1170330 main.go:141] libmachine: (addons-660533) Calling .Close
	I0407 12:14:41.305430 1170330 main.go:141] libmachine: Successfully made call to close driver server
	I0407 12:14:41.305448 1170330 main.go:141] libmachine: Making call to close connection to plugin binary
	I0407 12:14:41.305462 1170330 addons.go:479] Verifying addon metrics-server=true in "addons-660533"
	I0407 12:14:41.305613 1170330 out.go:177] * Verifying ingress addon...
	I0407 12:14:41.306556 1170330 out.go:177] * To access YAKD - Kubernetes Dashboard, wait for Pod to be ready and run the following command:
	
		minikube -p addons-660533 service yakd-dashboard -n yakd-dashboard
	
	I0407 12:14:41.308554 1170330 kapi.go:75] Waiting for pod with label "app.kubernetes.io/name=ingress-nginx" in ns "ingress-nginx" ...
	I0407 12:14:41.357472 1170330 kapi.go:86] Found 3 Pods for label selector app.kubernetes.io/name=ingress-nginx
	I0407 12:14:41.357506 1170330 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0407 12:14:41.448452 1170330 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.2/kubectl apply --force -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I0407 12:14:41.629483 1170330 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0407 12:14:41.813900 1170330 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0407 12:14:42.130917 1170330 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0407 12:14:42.318619 1170330 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0407 12:14:42.672633 1170330 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0407 12:14:42.850194 1170330 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0407 12:14:42.886300 1170330 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.2/kubectl apply -f /etc/kubernetes/addons/rbac-external-attacher.yaml -f /etc/kubernetes/addons/rbac-hostpath.yaml -f /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml -f /etc/kubernetes/addons/rbac-external-provisioner.yaml -f /etc/kubernetes/addons/rbac-external-resizer.yaml -f /etc/kubernetes/addons/rbac-external-snapshotter.yaml -f /etc/kubernetes/addons/csi-hostpath-attacher.yaml -f /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml -f /etc/kubernetes/addons/csi-hostpath-plugin.yaml -f /etc/kubernetes/addons/csi-hostpath-resizer.yaml -f /etc/kubernetes/addons/csi-hostpath-storageclass.yaml: (7.353692656s)
	I0407 12:14:42.886343 1170330 ssh_runner.go:235] Completed: cat /var/lib/minikube/google_application_credentials.json: (3.750782683s)
	I0407 12:14:42.886379 1170330 main.go:141] libmachine: Making call to close driver server
	I0407 12:14:42.886400 1170330 main.go:141] libmachine: (addons-660533) Calling .Close
	I0407 12:14:42.886428 1170330 ssh_runner.go:235] Completed: sudo pgrep -xnf kube-apiserver.*minikube.*: (2.805920856s)
	I0407 12:14:42.886462 1170330 api_server.go:72] duration metric: took 11.458793197s to wait for apiserver process to appear ...
	I0407 12:14:42.886474 1170330 api_server.go:88] waiting for apiserver healthz status ...
	I0407 12:14:42.886496 1170330 api_server.go:253] Checking apiserver healthz at https://192.168.39.112:8443/healthz ...
	I0407 12:14:42.886691 1170330 main.go:141] libmachine: Successfully made call to close driver server
	I0407 12:14:42.886711 1170330 main.go:141] libmachine: Making call to close connection to plugin binary
	I0407 12:14:42.886723 1170330 main.go:141] libmachine: Making call to close driver server
	I0407 12:14:42.886731 1170330 main.go:141] libmachine: (addons-660533) Calling .Close
	I0407 12:14:42.887116 1170330 main.go:141] libmachine: (addons-660533) DBG | Closing plugin on server side
	I0407 12:14:42.887147 1170330 main.go:141] libmachine: Successfully made call to close driver server
	I0407 12:14:42.887163 1170330 main.go:141] libmachine: Making call to close connection to plugin binary
	I0407 12:14:42.887186 1170330 addons.go:479] Verifying addon csi-hostpath-driver=true in "addons-660533"
	I0407 12:14:42.889115 1170330 out.go:177] * Verifying csi-hostpath-driver addon...
	I0407 12:14:42.889120 1170330 out.go:177]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v1.4.4
	I0407 12:14:42.891691 1170330 out.go:177]   - Using image gcr.io/k8s-minikube/gcp-auth-webhook:v0.1.3
	I0407 12:14:42.892587 1170330 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=csi-hostpath-driver" in ns "kube-system" ...
	I0407 12:14:42.893145 1170330 addons.go:435] installing /etc/kubernetes/addons/gcp-auth-ns.yaml
	I0407 12:14:42.893170 1170330 ssh_runner.go:362] scp gcp-auth/gcp-auth-ns.yaml --> /etc/kubernetes/addons/gcp-auth-ns.yaml (700 bytes)
	I0407 12:14:42.898695 1170330 api_server.go:279] https://192.168.39.112:8443/healthz returned 200:
	ok
	I0407 12:14:42.903757 1170330 api_server.go:141] control plane version: v1.32.2
	I0407 12:14:42.903803 1170330 api_server.go:131] duration metric: took 17.319743ms to wait for apiserver health ...
	I0407 12:14:42.903817 1170330 system_pods.go:43] waiting for kube-system pods to appear ...
	I0407 12:14:42.922133 1170330 kapi.go:86] Found 3 Pods for label selector kubernetes.io/minikube-addons=csi-hostpath-driver
	I0407 12:14:42.922173 1170330 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0407 12:14:42.922951 1170330 system_pods.go:59] 19 kube-system pods found
	I0407 12:14:42.923015 1170330 system_pods.go:61] "amd-gpu-device-plugin-vhjpx" [c3f03dee-8b86-4818-b32a-e5a77c247e53] Pending / Ready:ContainersNotReady (containers with unready status: [amd-gpu-device-plugin]) / ContainersReady:ContainersNotReady (containers with unready status: [amd-gpu-device-plugin])
	I0407 12:14:42.923026 1170330 system_pods.go:61] "coredns-668d6bf9bc-286nb" [1cc47b54-1095-48cd-8ad6-60f229c3b136] Running
	I0407 12:14:42.923035 1170330 system_pods.go:61] "coredns-668d6bf9bc-cnkpx" [aef46db8-e8ba-4d7b-8aa1-0e4fa9e424b9] Running
	I0407 12:14:42.923043 1170330 system_pods.go:61] "csi-hostpath-attacher-0" [a116b593-276f-42e4-87c0-feb92ec4e669] Pending / Ready:ContainersNotReady (containers with unready status: [csi-attacher]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-attacher])
	I0407 12:14:42.923054 1170330 system_pods.go:61] "csi-hostpath-resizer-0" [0cae20f8-5a1f-4689-9981-0459d2124509] Pending
	I0407 12:14:42.923063 1170330 system_pods.go:61] "csi-hostpathplugin-clwtl" [325cb036-4c02-4938-a0f0-36d27f633ff8] Pending / Ready:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter])
	I0407 12:14:42.923069 1170330 system_pods.go:61] "etcd-addons-660533" [ffdeacc4-c8e2-400d-b386-9dc67537b513] Running
	I0407 12:14:42.923076 1170330 system_pods.go:61] "kube-apiserver-addons-660533" [ea9559bb-883a-440c-9f04-3987a9f7d9ff] Running
	I0407 12:14:42.923083 1170330 system_pods.go:61] "kube-controller-manager-addons-660533" [e2ac8df9-9bed-4f67-be00-94c038c8dace] Running
	I0407 12:14:42.923093 1170330 system_pods.go:61] "kube-ingress-dns-minikube" [63b69b2e-c0db-49af-9871-d106be2e08e8] Pending / Ready:ContainersNotReady (containers with unready status: [minikube-ingress-dns]) / ContainersReady:ContainersNotReady (containers with unready status: [minikube-ingress-dns])
	I0407 12:14:42.923103 1170330 system_pods.go:61] "kube-proxy-fz5dl" [aa0dbf0c-4444-4f25-a3c3-41368949c06d] Running
	I0407 12:14:42.923111 1170330 system_pods.go:61] "kube-scheduler-addons-660533" [c14c7a40-cdc1-4d28-ad70-8e89bb070e23] Running
	I0407 12:14:42.923119 1170330 system_pods.go:61] "metrics-server-7fbb699795-hrk7g" [eee024d4-c7d8-46c7-82e3-d5ad8e36eccb] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0407 12:14:42.923130 1170330 system_pods.go:61] "nvidia-device-plugin-daemonset-rds5h" [b2520ae3-ad27-4503-9c27-7aff9b16771f] Pending / Ready:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr]) / ContainersReady:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr])
	I0407 12:14:42.923145 1170330 system_pods.go:61] "registry-6c88467877-cbcx8" [e8f82417-0cbb-4261-b17c-98dd81f33a21] Pending / Ready:ContainersNotReady (containers with unready status: [registry]) / ContainersReady:ContainersNotReady (containers with unready status: [registry])
	I0407 12:14:42.923154 1170330 system_pods.go:61] "registry-proxy-jjr6j" [c51e7db8-05e9-4bf1-8b27-d01380c2388b] Pending / Ready:ContainersNotReady (containers with unready status: [registry-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-proxy])
	I0407 12:14:42.923168 1170330 system_pods.go:61] "snapshot-controller-68b874b76f-724qh" [b9c50f5a-77e6-4e0a-bf6b-274f57699e6f] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I0407 12:14:42.923180 1170330 system_pods.go:61] "snapshot-controller-68b874b76f-vfx9j" [f8ea4adb-6b48-49a3-9b0a-30d3dbc78173] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I0407 12:14:42.923190 1170330 system_pods.go:61] "storage-provisioner" [f649b84e-e785-4204-bb89-5dff296307a9] Running
	I0407 12:14:42.923199 1170330 system_pods.go:74] duration metric: took 19.373963ms to wait for pod list to return data ...
	I0407 12:14:42.923215 1170330 default_sa.go:34] waiting for default service account to be created ...
	I0407 12:14:42.954215 1170330 default_sa.go:45] found service account: "default"
	I0407 12:14:42.954251 1170330 default_sa.go:55] duration metric: took 31.025836ms for default service account to be created ...
	I0407 12:14:42.954261 1170330 system_pods.go:116] waiting for k8s-apps to be running ...
	I0407 12:14:42.985325 1170330 system_pods.go:86] 19 kube-system pods found
	I0407 12:14:42.985380 1170330 system_pods.go:89] "amd-gpu-device-plugin-vhjpx" [c3f03dee-8b86-4818-b32a-e5a77c247e53] Pending / Ready:ContainersNotReady (containers with unready status: [amd-gpu-device-plugin]) / ContainersReady:ContainersNotReady (containers with unready status: [amd-gpu-device-plugin])
	I0407 12:14:42.985390 1170330 system_pods.go:89] "coredns-668d6bf9bc-286nb" [1cc47b54-1095-48cd-8ad6-60f229c3b136] Running
	I0407 12:14:42.985401 1170330 system_pods.go:89] "coredns-668d6bf9bc-cnkpx" [aef46db8-e8ba-4d7b-8aa1-0e4fa9e424b9] Running
	I0407 12:14:42.985413 1170330 system_pods.go:89] "csi-hostpath-attacher-0" [a116b593-276f-42e4-87c0-feb92ec4e669] Pending / Ready:ContainersNotReady (containers with unready status: [csi-attacher]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-attacher])
	I0407 12:14:42.985418 1170330 system_pods.go:89] "csi-hostpath-resizer-0" [0cae20f8-5a1f-4689-9981-0459d2124509] Pending
	I0407 12:14:42.985428 1170330 system_pods.go:89] "csi-hostpathplugin-clwtl" [325cb036-4c02-4938-a0f0-36d27f633ff8] Pending / Ready:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter])
	I0407 12:14:42.985437 1170330 system_pods.go:89] "etcd-addons-660533" [ffdeacc4-c8e2-400d-b386-9dc67537b513] Running
	I0407 12:14:42.985444 1170330 system_pods.go:89] "kube-apiserver-addons-660533" [ea9559bb-883a-440c-9f04-3987a9f7d9ff] Running
	I0407 12:14:42.985453 1170330 system_pods.go:89] "kube-controller-manager-addons-660533" [e2ac8df9-9bed-4f67-be00-94c038c8dace] Running
	I0407 12:14:42.985464 1170330 system_pods.go:89] "kube-ingress-dns-minikube" [63b69b2e-c0db-49af-9871-d106be2e08e8] Pending / Ready:ContainersNotReady (containers with unready status: [minikube-ingress-dns]) / ContainersReady:ContainersNotReady (containers with unready status: [minikube-ingress-dns])
	I0407 12:14:42.985473 1170330 system_pods.go:89] "kube-proxy-fz5dl" [aa0dbf0c-4444-4f25-a3c3-41368949c06d] Running
	I0407 12:14:42.985479 1170330 system_pods.go:89] "kube-scheduler-addons-660533" [c14c7a40-cdc1-4d28-ad70-8e89bb070e23] Running
	I0407 12:14:42.985490 1170330 system_pods.go:89] "metrics-server-7fbb699795-hrk7g" [eee024d4-c7d8-46c7-82e3-d5ad8e36eccb] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0407 12:14:42.985501 1170330 system_pods.go:89] "nvidia-device-plugin-daemonset-rds5h" [b2520ae3-ad27-4503-9c27-7aff9b16771f] Pending / Ready:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr]) / ContainersReady:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr])
	I0407 12:14:42.985511 1170330 system_pods.go:89] "registry-6c88467877-cbcx8" [e8f82417-0cbb-4261-b17c-98dd81f33a21] Pending / Ready:ContainersNotReady (containers with unready status: [registry]) / ContainersReady:ContainersNotReady (containers with unready status: [registry])
	I0407 12:14:42.985518 1170330 system_pods.go:89] "registry-proxy-jjr6j" [c51e7db8-05e9-4bf1-8b27-d01380c2388b] Pending / Ready:ContainersNotReady (containers with unready status: [registry-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-proxy])
	I0407 12:14:42.985529 1170330 system_pods.go:89] "snapshot-controller-68b874b76f-724qh" [b9c50f5a-77e6-4e0a-bf6b-274f57699e6f] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I0407 12:14:42.985537 1170330 system_pods.go:89] "snapshot-controller-68b874b76f-vfx9j" [f8ea4adb-6b48-49a3-9b0a-30d3dbc78173] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I0407 12:14:42.985545 1170330 system_pods.go:89] "storage-provisioner" [f649b84e-e785-4204-bb89-5dff296307a9] Running
	I0407 12:14:42.985558 1170330 system_pods.go:126] duration metric: took 31.287714ms to wait for k8s-apps to be running ...
	I0407 12:14:42.985572 1170330 system_svc.go:44] waiting for kubelet service to be running ....
	I0407 12:14:42.985638 1170330 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0407 12:14:42.999573 1170330 addons.go:435] installing /etc/kubernetes/addons/gcp-auth-service.yaml
	I0407 12:14:42.999617 1170330 ssh_runner.go:362] scp gcp-auth/gcp-auth-service.yaml --> /etc/kubernetes/addons/gcp-auth-service.yaml (788 bytes)
	I0407 12:14:43.062129 1170330 addons.go:435] installing /etc/kubernetes/addons/gcp-auth-webhook.yaml
	I0407 12:14:43.062155 1170330 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/gcp-auth-webhook.yaml (5421 bytes)
	I0407 12:14:43.096963 1170330 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.2/kubectl apply -f /etc/kubernetes/addons/gcp-auth-ns.yaml -f /etc/kubernetes/addons/gcp-auth-service.yaml -f /etc/kubernetes/addons/gcp-auth-webhook.yaml
	I0407 12:14:43.129649 1170330 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0407 12:14:43.314226 1170330 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0407 12:14:43.397240 1170330 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0407 12:14:43.634422 1170330 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0407 12:14:43.798384 1170330 system_svc.go:56] duration metric: took 812.796142ms WaitForService to wait for kubelet
	I0407 12:14:43.798445 1170330 kubeadm.go:582] duration metric: took 12.370773074s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0407 12:14:43.798474 1170330 node_conditions.go:102] verifying NodePressure condition ...
	I0407 12:14:43.798403 1170330 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.2/kubectl apply --force -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: (2.349873791s)
	I0407 12:14:43.798595 1170330 main.go:141] libmachine: Making call to close driver server
	I0407 12:14:43.798624 1170330 main.go:141] libmachine: (addons-660533) Calling .Close
	I0407 12:14:43.799267 1170330 main.go:141] libmachine: Successfully made call to close driver server
	I0407 12:14:43.799301 1170330 main.go:141] libmachine: Making call to close connection to plugin binary
	I0407 12:14:43.799314 1170330 main.go:141] libmachine: Making call to close driver server
	I0407 12:14:43.799324 1170330 main.go:141] libmachine: (addons-660533) Calling .Close
	I0407 12:14:43.799339 1170330 main.go:141] libmachine: (addons-660533) DBG | Closing plugin on server side
	I0407 12:14:43.799626 1170330 main.go:141] libmachine: Successfully made call to close driver server
	I0407 12:14:43.799645 1170330 main.go:141] libmachine: Making call to close connection to plugin binary
	I0407 12:14:43.802979 1170330 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0407 12:14:43.803029 1170330 node_conditions.go:123] node cpu capacity is 2
	I0407 12:14:43.803047 1170330 node_conditions.go:105] duration metric: took 4.565293ms to run NodePressure ...
	I0407 12:14:43.803064 1170330 start.go:241] waiting for startup goroutines ...
	I0407 12:14:43.812345 1170330 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0407 12:14:43.896629 1170330 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0407 12:14:44.129900 1170330 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0407 12:14:44.314577 1170330 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0407 12:14:44.441313 1170330 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0407 12:14:44.556359 1170330 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.2/kubectl apply -f /etc/kubernetes/addons/gcp-auth-ns.yaml -f /etc/kubernetes/addons/gcp-auth-service.yaml -f /etc/kubernetes/addons/gcp-auth-webhook.yaml: (1.459342323s)
	I0407 12:14:44.556431 1170330 main.go:141] libmachine: Making call to close driver server
	I0407 12:14:44.556449 1170330 main.go:141] libmachine: (addons-660533) Calling .Close
	I0407 12:14:44.556949 1170330 main.go:141] libmachine: (addons-660533) DBG | Closing plugin on server side
	I0407 12:14:44.556979 1170330 main.go:141] libmachine: Successfully made call to close driver server
	I0407 12:14:44.556999 1170330 main.go:141] libmachine: Making call to close connection to plugin binary
	I0407 12:14:44.557021 1170330 main.go:141] libmachine: Making call to close driver server
	I0407 12:14:44.557030 1170330 main.go:141] libmachine: (addons-660533) Calling .Close
	I0407 12:14:44.557338 1170330 main.go:141] libmachine: Successfully made call to close driver server
	I0407 12:14:44.557357 1170330 main.go:141] libmachine: Making call to close connection to plugin binary
	I0407 12:14:44.558639 1170330 addons.go:479] Verifying addon gcp-auth=true in "addons-660533"
	I0407 12:14:44.560828 1170330 out.go:177] * Verifying gcp-auth addon...
	I0407 12:14:44.563592 1170330 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=gcp-auth" in ns "gcp-auth" ...
	I0407 12:14:44.630245 1170330 kapi.go:86] Found 1 Pods for label selector kubernetes.io/minikube-addons=gcp-auth
	I0407 12:14:44.630272 1170330 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0407 12:14:44.648073 1170330 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0407 12:14:44.814844 1170330 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0407 12:14:44.896124 1170330 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0407 12:14:45.067338 1170330 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0407 12:14:45.128852 1170330 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0407 12:14:45.313268 1170330 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0407 12:14:45.414192 1170330 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0407 12:14:45.567977 1170330 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0407 12:14:45.628316 1170330 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0407 12:14:45.821483 1170330 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0407 12:14:45.914244 1170330 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0407 12:14:46.067082 1170330 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0407 12:14:46.128480 1170330 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0407 12:14:46.313305 1170330 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0407 12:14:46.396429 1170330 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0407 12:14:46.568855 1170330 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0407 12:14:46.627865 1170330 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0407 12:14:46.814121 1170330 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0407 12:14:46.897600 1170330 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0407 12:14:47.067989 1170330 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0407 12:14:47.128399 1170330 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0407 12:14:47.313612 1170330 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0407 12:14:47.397812 1170330 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0407 12:14:47.567789 1170330 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0407 12:14:47.627838 1170330 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0407 12:14:47.813452 1170330 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0407 12:14:47.896544 1170330 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0407 12:14:48.067837 1170330 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0407 12:14:48.127928 1170330 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0407 12:14:48.313497 1170330 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0407 12:14:48.396869 1170330 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0407 12:14:48.566765 1170330 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0407 12:14:48.628061 1170330 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0407 12:14:48.813692 1170330 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0407 12:14:48.896682 1170330 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0407 12:14:49.067574 1170330 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0407 12:14:49.128061 1170330 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0407 12:14:49.312429 1170330 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0407 12:14:49.396891 1170330 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0407 12:14:49.567107 1170330 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0407 12:14:49.628686 1170330 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0407 12:14:49.812571 1170330 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0407 12:14:49.896861 1170330 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0407 12:14:50.066771 1170330 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0407 12:14:50.128171 1170330 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0407 12:14:50.313559 1170330 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0407 12:14:50.416112 1170330 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0407 12:14:50.570237 1170330 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0407 12:14:50.630156 1170330 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0407 12:14:50.813610 1170330 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0407 12:14:50.897067 1170330 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0407 12:14:51.067175 1170330 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0407 12:14:51.128736 1170330 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0407 12:14:51.317605 1170330 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0407 12:14:51.396942 1170330 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0407 12:14:51.568177 1170330 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0407 12:14:51.628097 1170330 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0407 12:14:51.812781 1170330 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0407 12:14:51.898208 1170330 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0407 12:14:52.068016 1170330 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0407 12:14:52.129625 1170330 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0407 12:14:52.312537 1170330 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0407 12:14:52.398806 1170330 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0407 12:14:52.567453 1170330 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0407 12:14:52.627836 1170330 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0407 12:14:52.812404 1170330 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0407 12:14:52.897286 1170330 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0407 12:14:53.076637 1170330 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0407 12:14:53.128523 1170330 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0407 12:14:53.313619 1170330 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0407 12:14:53.414762 1170330 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0407 12:14:53.568666 1170330 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0407 12:14:53.628583 1170330 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0407 12:14:53.813880 1170330 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0407 12:14:53.896517 1170330 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0407 12:14:54.067855 1170330 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0407 12:14:54.128679 1170330 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0407 12:14:54.312880 1170330 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0407 12:14:54.399594 1170330 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0407 12:14:54.568603 1170330 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0407 12:14:54.629412 1170330 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0407 12:14:54.813204 1170330 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0407 12:14:54.896830 1170330 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0407 12:14:55.066961 1170330 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0407 12:14:55.129537 1170330 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0407 12:14:55.313587 1170330 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0407 12:14:55.415006 1170330 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0407 12:14:55.568070 1170330 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0407 12:14:55.629382 1170330 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0407 12:14:55.813396 1170330 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0407 12:14:55.896820 1170330 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0407 12:14:56.067855 1170330 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0407 12:14:56.128802 1170330 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0407 12:14:56.313024 1170330 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0407 12:14:56.396052 1170330 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0407 12:14:56.568406 1170330 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0407 12:14:56.628834 1170330 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0407 12:14:56.812451 1170330 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0407 12:14:56.897185 1170330 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0407 12:14:57.068037 1170330 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0407 12:14:57.128741 1170330 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0407 12:14:57.313317 1170330 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0407 12:14:57.397507 1170330 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0407 12:14:57.567713 1170330 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0407 12:14:57.627907 1170330 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0407 12:14:57.812994 1170330 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0407 12:14:57.897296 1170330 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0407 12:14:58.067170 1170330 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0407 12:14:58.129022 1170330 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0407 12:14:58.313419 1170330 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0407 12:14:58.397029 1170330 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0407 12:14:58.568520 1170330 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0407 12:14:58.628402 1170330 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0407 12:14:58.814012 1170330 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0407 12:14:58.897582 1170330 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0407 12:14:59.067995 1170330 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0407 12:14:59.128130 1170330 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0407 12:14:59.312904 1170330 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0407 12:14:59.397494 1170330 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0407 12:14:59.568659 1170330 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0407 12:14:59.629340 1170330 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0407 12:14:59.813656 1170330 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0407 12:14:59.896948 1170330 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0407 12:15:00.067371 1170330 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0407 12:15:00.128297 1170330 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0407 12:15:00.314495 1170330 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0407 12:15:00.415142 1170330 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0407 12:15:00.568440 1170330 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0407 12:15:00.630822 1170330 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0407 12:15:00.813028 1170330 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0407 12:15:00.897617 1170330 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0407 12:15:01.068270 1170330 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0407 12:15:01.129639 1170330 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0407 12:15:01.313634 1170330 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0407 12:15:01.398769 1170330 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0407 12:15:01.570469 1170330 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0407 12:15:01.630386 1170330 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0407 12:15:01.812534 1170330 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0407 12:15:01.896909 1170330 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0407 12:15:02.068197 1170330 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0407 12:15:02.130008 1170330 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0407 12:15:02.313743 1170330 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0407 12:15:02.401492 1170330 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0407 12:15:02.568474 1170330 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0407 12:15:02.629388 1170330 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0407 12:15:02.813524 1170330 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0407 12:15:02.896346 1170330 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0407 12:15:03.067405 1170330 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0407 12:15:03.127576 1170330 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0407 12:15:03.312137 1170330 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0407 12:15:03.396815 1170330 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0407 12:15:03.567077 1170330 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0407 12:15:03.629245 1170330 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0407 12:15:03.814499 1170330 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0407 12:15:03.896891 1170330 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0407 12:15:04.066825 1170330 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0407 12:15:04.128277 1170330 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0407 12:15:04.313438 1170330 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0407 12:15:04.397804 1170330 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0407 12:15:04.568719 1170330 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0407 12:15:04.629929 1170330 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0407 12:15:04.814255 1170330 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0407 12:15:04.896566 1170330 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0407 12:15:05.068774 1170330 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0407 12:15:05.128363 1170330 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0407 12:15:05.313425 1170330 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0407 12:15:05.397671 1170330 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0407 12:15:05.567542 1170330 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0407 12:15:05.628769 1170330 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0407 12:15:05.813500 1170330 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0407 12:15:05.896791 1170330 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0407 12:15:06.066991 1170330 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0407 12:15:06.130563 1170330 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0407 12:15:06.314392 1170330 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0407 12:15:06.848721 1170330 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0407 12:15:06.848791 1170330 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0407 12:15:06.848872 1170330 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0407 12:15:06.849018 1170330 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0407 12:15:06.948768 1170330 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0407 12:15:07.067824 1170330 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0407 12:15:07.129392 1170330 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0407 12:15:07.313995 1170330 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0407 12:15:07.396119 1170330 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0407 12:15:07.567608 1170330 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0407 12:15:07.628560 1170330 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0407 12:15:07.814564 1170330 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0407 12:15:07.898593 1170330 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0407 12:15:08.069448 1170330 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0407 12:15:08.128285 1170330 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0407 12:15:08.313341 1170330 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0407 12:15:08.396899 1170330 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0407 12:15:08.572218 1170330 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0407 12:15:08.630723 1170330 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0407 12:15:08.814876 1170330 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0407 12:15:09.172904 1170330 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0407 12:15:09.173486 1170330 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0407 12:15:09.173571 1170330 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0407 12:15:09.312136 1170330 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0407 12:15:09.396661 1170330 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0407 12:15:09.567004 1170330 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0407 12:15:09.628743 1170330 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0407 12:15:09.814343 1170330 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0407 12:15:09.897621 1170330 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0407 12:15:10.067898 1170330 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0407 12:15:10.128208 1170330 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0407 12:15:10.312933 1170330 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0407 12:15:10.397285 1170330 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0407 12:15:10.567425 1170330 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0407 12:15:10.725764 1170330 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0407 12:15:10.812995 1170330 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0407 12:15:10.897960 1170330 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0407 12:15:11.066994 1170330 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0407 12:15:11.128276 1170330 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0407 12:15:11.577753 1170330 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0407 12:15:11.578033 1170330 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0407 12:15:11.578087 1170330 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0407 12:15:11.636817 1170330 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0407 12:15:11.814392 1170330 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0407 12:15:11.898754 1170330 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0407 12:15:12.067759 1170330 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0407 12:15:12.127927 1170330 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0407 12:15:12.312397 1170330 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0407 12:15:12.396653 1170330 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0407 12:15:12.568285 1170330 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0407 12:15:12.629790 1170330 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0407 12:15:12.812552 1170330 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0407 12:15:12.897213 1170330 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0407 12:15:13.067909 1170330 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0407 12:15:13.128164 1170330 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0407 12:15:13.312697 1170330 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0407 12:15:13.397634 1170330 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0407 12:15:13.568840 1170330 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0407 12:15:13.917987 1170330 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0407 12:15:13.918175 1170330 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0407 12:15:13.919240 1170330 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0407 12:15:14.067576 1170330 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0407 12:15:14.130991 1170330 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0407 12:15:14.312891 1170330 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0407 12:15:14.397505 1170330 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0407 12:15:14.567945 1170330 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0407 12:15:14.628197 1170330 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0407 12:15:14.813364 1170330 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0407 12:15:14.905748 1170330 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0407 12:15:15.068438 1170330 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0407 12:15:15.128668 1170330 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0407 12:15:15.312368 1170330 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0407 12:15:15.396558 1170330 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0407 12:15:15.569500 1170330 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0407 12:15:15.629490 1170330 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0407 12:15:15.814059 1170330 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0407 12:15:15.897244 1170330 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0407 12:15:16.067906 1170330 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0407 12:15:16.128219 1170330 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0407 12:15:16.313516 1170330 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0407 12:15:16.398194 1170330 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0407 12:15:16.570761 1170330 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0407 12:15:16.671638 1170330 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0407 12:15:16.813661 1170330 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0407 12:15:16.899505 1170330 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0407 12:15:17.069165 1170330 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0407 12:15:17.129031 1170330 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0407 12:15:17.312623 1170330 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0407 12:15:17.397345 1170330 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0407 12:15:17.573972 1170330 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0407 12:15:17.629455 1170330 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0407 12:15:17.813993 1170330 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0407 12:15:17.897394 1170330 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0407 12:15:18.069131 1170330 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0407 12:15:18.130478 1170330 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0407 12:15:18.313414 1170330 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0407 12:15:18.396981 1170330 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0407 12:15:18.567433 1170330 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0407 12:15:18.629230 1170330 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0407 12:15:18.813033 1170330 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0407 12:15:18.896928 1170330 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0407 12:15:19.067012 1170330 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0407 12:15:19.128226 1170330 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0407 12:15:19.312693 1170330 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0407 12:15:19.397875 1170330 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0407 12:15:19.569158 1170330 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0407 12:15:19.629750 1170330 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0407 12:15:19.813599 1170330 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0407 12:15:19.897186 1170330 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0407 12:15:20.068177 1170330 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0407 12:15:20.128896 1170330 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0407 12:15:20.313069 1170330 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0407 12:15:20.396755 1170330 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0407 12:15:20.567259 1170330 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0407 12:15:20.630567 1170330 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0407 12:15:20.814437 1170330 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0407 12:15:20.896679 1170330 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0407 12:15:21.068209 1170330 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0407 12:15:21.128623 1170330 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0407 12:15:21.313325 1170330 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0407 12:15:21.396968 1170330 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0407 12:15:21.569291 1170330 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0407 12:15:21.630354 1170330 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0407 12:15:21.813007 1170330 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0407 12:15:21.896772 1170330 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0407 12:15:22.079649 1170330 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0407 12:15:22.128229 1170330 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0407 12:15:22.312984 1170330 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0407 12:15:22.396247 1170330 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0407 12:15:22.568465 1170330 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0407 12:15:22.629032 1170330 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0407 12:15:22.813988 1170330 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0407 12:15:22.897048 1170330 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0407 12:15:23.067078 1170330 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0407 12:15:23.128415 1170330 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0407 12:15:23.312942 1170330 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0407 12:15:23.396333 1170330 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0407 12:15:23.571154 1170330 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0407 12:15:23.631131 1170330 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0407 12:15:23.815928 1170330 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0407 12:15:23.896778 1170330 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0407 12:15:24.067593 1170330 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0407 12:15:24.128566 1170330 kapi.go:107] duration metric: took 46.004291886s to wait for kubernetes.io/minikube-addons=registry ...
	I0407 12:15:24.312848 1170330 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0407 12:15:24.396467 1170330 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0407 12:15:24.567877 1170330 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0407 12:15:24.812425 1170330 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0407 12:15:24.896436 1170330 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0407 12:15:25.068058 1170330 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0407 12:15:25.317419 1170330 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0407 12:15:25.396498 1170330 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0407 12:15:25.567460 1170330 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0407 12:15:25.812320 1170330 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0407 12:15:25.896368 1170330 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0407 12:15:26.067433 1170330 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0407 12:15:26.313765 1170330 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0407 12:15:26.395905 1170330 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0407 12:15:26.568004 1170330 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0407 12:15:26.812687 1170330 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0407 12:15:26.897892 1170330 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0407 12:15:27.194988 1170330 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0407 12:15:27.312602 1170330 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0407 12:15:27.397294 1170330 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0407 12:15:27.568333 1170330 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0407 12:15:27.814335 1170330 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0407 12:15:27.897210 1170330 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0407 12:15:28.237688 1170330 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0407 12:15:28.312281 1170330 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0407 12:15:28.398329 1170330 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0407 12:15:28.568977 1170330 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0407 12:15:28.814072 1170330 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0407 12:15:28.897449 1170330 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0407 12:15:29.073840 1170330 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0407 12:15:29.313341 1170330 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0407 12:15:29.396862 1170330 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0407 12:15:29.567370 1170330 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0407 12:15:29.814967 1170330 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0407 12:15:29.896772 1170330 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0407 12:15:30.073904 1170330 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0407 12:15:30.313198 1170330 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0407 12:15:30.396775 1170330 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0407 12:15:30.568306 1170330 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0407 12:15:30.814243 1170330 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0407 12:15:30.897754 1170330 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0407 12:15:31.067865 1170330 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0407 12:15:31.313055 1170330 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0407 12:15:31.396615 1170330 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0407 12:15:31.567348 1170330 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0407 12:15:31.814210 1170330 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0407 12:15:31.896630 1170330 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0407 12:15:32.068659 1170330 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0407 12:15:32.313074 1170330 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0407 12:15:32.396903 1170330 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0407 12:15:32.569843 1170330 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0407 12:15:32.813587 1170330 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0407 12:15:32.899817 1170330 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0407 12:15:33.068440 1170330 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0407 12:15:33.313877 1170330 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0407 12:15:33.397241 1170330 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0407 12:15:33.567538 1170330 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0407 12:15:33.812613 1170330 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0407 12:15:33.898370 1170330 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0407 12:15:34.067959 1170330 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0407 12:15:34.530248 1170330 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0407 12:15:34.530591 1170330 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0407 12:15:34.567429 1170330 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0407 12:15:34.812835 1170330 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0407 12:15:34.897154 1170330 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0407 12:15:35.066727 1170330 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0407 12:15:35.313193 1170330 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0407 12:15:35.396976 1170330 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0407 12:15:35.567027 1170330 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0407 12:15:35.813674 1170330 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0407 12:15:35.897456 1170330 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0407 12:15:36.067739 1170330 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0407 12:15:36.312536 1170330 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0407 12:15:36.397010 1170330 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0407 12:15:36.568412 1170330 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0407 12:15:36.813121 1170330 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0407 12:15:36.896871 1170330 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0407 12:15:37.068353 1170330 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0407 12:15:37.313291 1170330 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0407 12:15:37.398030 1170330 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0407 12:15:37.794542 1170330 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0407 12:15:37.817833 1170330 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0407 12:15:37.896148 1170330 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0407 12:15:38.067370 1170330 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0407 12:15:38.313477 1170330 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0407 12:15:38.397697 1170330 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0407 12:15:38.567463 1170330 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0407 12:15:38.812504 1170330 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0407 12:15:38.897752 1170330 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0407 12:15:39.066689 1170330 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0407 12:15:39.315244 1170330 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0407 12:15:39.401486 1170330 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0407 12:15:39.568533 1170330 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0407 12:15:39.816690 1170330 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0407 12:15:39.897599 1170330 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0407 12:15:40.069058 1170330 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0407 12:15:40.314546 1170330 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0407 12:15:40.398022 1170330 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0407 12:15:40.590626 1170330 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0407 12:15:40.814833 1170330 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0407 12:15:40.917812 1170330 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0407 12:15:41.068088 1170330 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0407 12:15:41.325103 1170330 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0407 12:15:41.430775 1170330 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0407 12:15:41.568155 1170330 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0407 12:15:41.812882 1170330 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0407 12:15:41.897056 1170330 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0407 12:15:42.066943 1170330 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0407 12:15:42.312221 1170330 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0407 12:15:42.397141 1170330 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0407 12:15:42.567076 1170330 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0407 12:15:42.813346 1170330 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0407 12:15:42.897781 1170330 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0407 12:15:43.070353 1170330 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0407 12:15:43.314338 1170330 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0407 12:15:43.396884 1170330 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0407 12:15:43.567053 1170330 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0407 12:15:43.813323 1170330 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0407 12:15:43.897104 1170330 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0407 12:15:44.067178 1170330 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0407 12:15:44.313362 1170330 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0407 12:15:44.397742 1170330 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0407 12:15:44.567144 1170330 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0407 12:15:44.815151 1170330 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0407 12:15:44.897076 1170330 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0407 12:15:45.067594 1170330 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0407 12:15:45.312968 1170330 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0407 12:15:45.395846 1170330 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0407 12:15:45.567669 1170330 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0407 12:15:45.817557 1170330 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0407 12:15:45.896836 1170330 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0407 12:15:46.068294 1170330 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0407 12:15:46.339273 1170330 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0407 12:15:46.396391 1170330 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0407 12:15:46.567994 1170330 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0407 12:15:46.815265 1170330 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0407 12:15:46.920522 1170330 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0407 12:15:47.068344 1170330 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0407 12:15:47.312991 1170330 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0407 12:15:47.402816 1170330 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0407 12:15:47.566979 1170330 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0407 12:15:47.813441 1170330 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0407 12:15:47.897261 1170330 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0407 12:15:48.069293 1170330 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0407 12:15:48.316430 1170330 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0407 12:15:48.417727 1170330 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0407 12:15:48.576824 1170330 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0407 12:15:48.812482 1170330 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0407 12:15:48.897134 1170330 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0407 12:15:49.067002 1170330 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0407 12:15:49.312736 1170330 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0407 12:15:49.397105 1170330 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0407 12:15:49.568585 1170330 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0407 12:15:49.815878 1170330 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0407 12:15:49.902925 1170330 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0407 12:15:50.066882 1170330 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0407 12:15:50.312953 1170330 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0407 12:15:50.396669 1170330 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0407 12:15:50.567448 1170330 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0407 12:15:50.814199 1170330 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0407 12:15:50.897913 1170330 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0407 12:15:51.069010 1170330 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0407 12:15:51.312399 1170330 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0407 12:15:51.396786 1170330 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0407 12:15:51.567288 1170330 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0407 12:15:51.812785 1170330 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0407 12:15:51.898186 1170330 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0407 12:15:52.067135 1170330 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0407 12:15:52.312575 1170330 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0407 12:15:52.397041 1170330 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0407 12:15:52.567358 1170330 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0407 12:15:53.227625 1170330 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0407 12:15:53.238256 1170330 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0407 12:15:53.244657 1170330 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0407 12:15:53.329288 1170330 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0407 12:15:53.398431 1170330 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0407 12:15:53.568568 1170330 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0407 12:15:53.813684 1170330 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0407 12:15:53.900028 1170330 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0407 12:15:54.067579 1170330 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0407 12:15:54.311948 1170330 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0407 12:15:54.395925 1170330 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0407 12:15:54.566968 1170330 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0407 12:15:54.813030 1170330 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0407 12:15:54.896724 1170330 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0407 12:15:55.070559 1170330 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0407 12:15:55.544553 1170330 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0407 12:15:55.545383 1170330 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0407 12:15:55.574101 1170330 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0407 12:15:55.813836 1170330 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0407 12:15:55.898488 1170330 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0407 12:15:56.072144 1170330 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0407 12:15:56.313295 1170330 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0407 12:15:56.398802 1170330 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0407 12:15:56.570284 1170330 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0407 12:15:56.817564 1170330 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0407 12:15:56.918266 1170330 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0407 12:15:57.067751 1170330 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0407 12:15:57.327789 1170330 kapi.go:107] duration metric: took 1m16.019233836s to wait for app.kubernetes.io/name=ingress-nginx ...
	I0407 12:15:57.429091 1170330 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0407 12:15:57.567896 1170330 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0407 12:15:57.900264 1170330 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0407 12:15:58.070942 1170330 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0407 12:15:58.397288 1170330 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0407 12:15:58.568554 1170330 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0407 12:15:58.896660 1170330 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0407 12:15:59.081031 1170330 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0407 12:15:59.397831 1170330 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0407 12:15:59.566466 1170330 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0407 12:15:59.903770 1170330 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0407 12:16:00.080574 1170330 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0407 12:16:00.396293 1170330 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0407 12:16:00.568684 1170330 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0407 12:16:00.898141 1170330 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0407 12:16:01.068825 1170330 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0407 12:16:01.397790 1170330 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0407 12:16:01.568266 1170330 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0407 12:16:01.896758 1170330 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0407 12:16:02.066820 1170330 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0407 12:16:02.396600 1170330 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0407 12:16:02.567983 1170330 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0407 12:16:02.897146 1170330 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0407 12:16:03.067407 1170330 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0407 12:16:03.401787 1170330 kapi.go:107] duration metric: took 1m20.50919874s to wait for kubernetes.io/minikube-addons=csi-hostpath-driver ...
	I0407 12:16:03.568187 1170330 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0407 12:16:04.067652 1170330 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0407 12:16:04.568052 1170330 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0407 12:16:05.068542 1170330 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0407 12:16:05.567523 1170330 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0407 12:16:06.067734 1170330 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0407 12:16:06.569873 1170330 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0407 12:16:07.068384 1170330 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0407 12:16:07.567527 1170330 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0407 12:16:08.067867 1170330 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0407 12:16:08.567634 1170330 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0407 12:16:09.067857 1170330 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0407 12:16:09.568677 1170330 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0407 12:16:10.067160 1170330 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0407 12:16:10.568636 1170330 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0407 12:16:11.068254 1170330 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0407 12:16:11.569353 1170330 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0407 12:16:12.067199 1170330 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0407 12:16:12.567889 1170330 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0407 12:16:13.068388 1170330 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0407 12:16:13.567453 1170330 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0407 12:16:14.067829 1170330 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0407 12:16:14.567997 1170330 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0407 12:16:15.068229 1170330 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0407 12:16:15.568673 1170330 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0407 12:16:16.067517 1170330 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0407 12:16:16.568409 1170330 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0407 12:16:17.067789 1170330 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0407 12:16:17.567989 1170330 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0407 12:16:18.067815 1170330 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0407 12:16:18.567544 1170330 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0407 12:16:19.067822 1170330 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0407 12:16:19.567569 1170330 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0407 12:16:20.066859 1170330 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0407 12:16:20.568124 1170330 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0407 12:16:21.068311 1170330 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0407 12:16:21.568887 1170330 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0407 12:16:22.067455 1170330 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0407 12:16:22.567162 1170330 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0407 12:16:23.071771 1170330 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0407 12:16:23.568127 1170330 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0407 12:16:24.067703 1170330 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0407 12:16:24.568020 1170330 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0407 12:16:25.068024 1170330 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0407 12:16:25.568461 1170330 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0407 12:16:26.066730 1170330 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0407 12:16:26.566808 1170330 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0407 12:16:27.067404 1170330 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0407 12:16:27.567091 1170330 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0407 12:16:28.069789 1170330 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0407 12:16:28.567198 1170330 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0407 12:16:29.068307 1170330 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0407 12:16:29.567277 1170330 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0407 12:16:30.066731 1170330 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0407 12:16:30.567143 1170330 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0407 12:16:31.068323 1170330 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0407 12:16:31.567533 1170330 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0407 12:16:32.067818 1170330 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0407 12:16:32.567481 1170330 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0407 12:16:33.088285 1170330 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0407 12:16:33.567762 1170330 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0407 12:16:34.067398 1170330 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0407 12:16:34.567269 1170330 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0407 12:16:35.066630 1170330 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0407 12:16:35.567675 1170330 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0407 12:16:36.067004 1170330 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0407 12:16:36.567214 1170330 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0407 12:16:37.067744 1170330 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0407 12:16:37.567510 1170330 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0407 12:16:38.067067 1170330 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0407 12:16:38.568208 1170330 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0407 12:16:39.068789 1170330 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0407 12:16:39.567757 1170330 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0407 12:16:40.067372 1170330 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0407 12:16:40.566875 1170330 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0407 12:16:41.067534 1170330 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0407 12:16:41.567361 1170330 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0407 12:16:42.068276 1170330 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0407 12:16:42.566863 1170330 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0407 12:16:43.068929 1170330 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0407 12:16:43.567033 1170330 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0407 12:16:44.067781 1170330 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0407 12:16:44.567582 1170330 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0407 12:16:45.067438 1170330 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0407 12:16:45.567114 1170330 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0407 12:16:46.067582 1170330 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0407 12:16:46.566990 1170330 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0407 12:16:47.067358 1170330 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0407 12:16:47.567900 1170330 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0407 12:16:48.067195 1170330 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0407 12:16:48.567799 1170330 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0407 12:16:49.073057 1170330 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0407 12:16:49.568420 1170330 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0407 12:16:50.067367 1170330 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0407 12:16:50.567026 1170330 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0407 12:16:51.068245 1170330 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0407 12:16:51.568107 1170330 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0407 12:16:52.066951 1170330 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0407 12:16:52.568041 1170330 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0407 12:16:53.068033 1170330 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0407 12:16:53.568347 1170330 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0407 12:16:54.067703 1170330 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0407 12:16:54.567666 1170330 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0407 12:16:55.067464 1170330 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0407 12:16:55.567422 1170330 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0407 12:16:56.066875 1170330 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0407 12:16:56.567371 1170330 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0407 12:16:57.066913 1170330 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0407 12:16:57.567931 1170330 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0407 12:16:58.067342 1170330 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0407 12:16:58.566902 1170330 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0407 12:16:59.068317 1170330 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0407 12:16:59.567094 1170330 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0407 12:17:00.068177 1170330 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0407 12:17:00.567860 1170330 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0407 12:17:01.067656 1170330 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0407 12:17:01.567604 1170330 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0407 12:17:02.066970 1170330 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0407 12:17:02.567669 1170330 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0407 12:17:03.068141 1170330 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0407 12:17:03.568959 1170330 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0407 12:17:04.068592 1170330 kapi.go:107] duration metric: took 2m19.505000539s to wait for kubernetes.io/minikube-addons=gcp-auth ...
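The kapi.go:96 lines above are minikube polling each addon's pods by label selector until they leave Pending; the kapi.go:107 "duration metric" lines mark when each selector's pods finally came up (registry after ~46s, ingress-nginx after ~1m16s, csi-hostpath-driver after ~1m20s, gcp-auth after ~2m19s). A rough manual equivalent, sketched with kubectl wait — the namespaces and the 10m timeout are assumptions based on the usual addon layout, not values taken from this log:

# approximate stand-in for minikube's internal wait loop (not the exact mechanism)
kubectl --context addons-660533 -n kube-system   wait --for=condition=ready pod -l kubernetes.io/minikube-addons=registry            --timeout=10m
kubectl --context addons-660533 -n ingress-nginx wait --for=condition=ready pod -l app.kubernetes.io/name=ingress-nginx              --timeout=10m
kubectl --context addons-660533 -n kube-system   wait --for=condition=ready pod -l kubernetes.io/minikube-addons=csi-hostpath-driver --timeout=10m
kubectl --context addons-660533 -n gcp-auth      wait --for=condition=ready pod -l kubernetes.io/minikube-addons=gcp-auth            --timeout=10m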
	I0407 12:17:04.070900 1170330 out.go:177] * Your GCP credentials will now be mounted into every pod created in the addons-660533 cluster.
	I0407 12:17:04.072809 1170330 out.go:177] * If you don't want your credentials mounted into a specific pod, add a label with the `gcp-auth-skip-secret` key to your pod configuration.
	I0407 12:17:04.074760 1170330 out.go:177] * If you want existing pods to be mounted with credentials, either recreate them or rerun addons enable with --refresh.
	I0407 12:17:04.077136 1170330 out.go:177] * Enabled addons: cloud-spanner, ingress-dns, storage-provisioner, amd-gpu-device-plugin, nvidia-device-plugin, default-storageclass, storage-provisioner-rancher, inspektor-gadget, metrics-server, yakd, volumesnapshots, registry, ingress, csi-hostpath-driver, gcp-auth
	I0407 12:17:04.079889 1170330 addons.go:514] duration metric: took 2m32.652103758s for enable addons: enabled=[cloud-spanner ingress-dns storage-provisioner amd-gpu-device-plugin nvidia-device-plugin default-storageclass storage-provisioner-rancher inspektor-gadget metrics-server yakd volumesnapshots registry ingress csi-hostpath-driver gcp-auth]
	I0407 12:17:04.080014 1170330 start.go:246] waiting for cluster config update ...
	I0407 12:17:04.080059 1170330 start.go:255] writing updated cluster config ...
	I0407 12:17:04.080618 1170330 ssh_runner.go:195] Run: rm -f paused
	I0407 12:17:04.144604 1170330 start.go:600] kubectl: 1.32.3, cluster: 1.32.2 (minor skew: 0)
	I0407 12:17:04.147070 1170330 out.go:177] * Done! kubectl is now configured to use "addons-660533" cluster and "default" namespace by default
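The three gcp-auth hints printed at 12:17:04 describe the addon's opt-out mechanism: once the addon is enabled, GCP credentials are mounted into every newly created pod unless the pod carries a label whose key is gcp-auth-skip-secret. A minimal sketch of such a pod follows; the pod name, image, and the "true" label value are illustrative assumptions, not taken from this run:

# hypothetical pod that asks the gcp-auth webhook not to mount credentials
kubectl --context addons-660533 apply -f - <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: no-gcp-creds                # example name, not from the test data
  labels:
    gcp-auth-skip-secret: "true"    # key is what the hint asks for; the value is assumed
spec:
  containers:
  - name: app
    image: busybox                  # placeholder image, used only for illustration
    command: ["sleep", "3600"]
EOF

Per the last hint, pods that already existed when the addon was enabled only pick up (or drop) the mount after being recreated or after rerunning addons enable with --refresh.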
	
	
	==> CRI-O <==
	Apr 07 12:20:15 addons-660533 crio[660]: time="2025-04-07 12:20:15.036783691Z" level=debug msg=" No signature storage configuration found for docker.io/kicbase/echo-server:1.0, using built-in default file:///var/lib/containers/sigstore" file="docker/registries_d.go:176"
	Apr 07 12:20:15 addons-660533 crio[660]: time="2025-04-07 12:20:15.036822354Z" level=debug msg="Looking for TLS certificates and private keys in /etc/docker/certs.d/docker.io" file="tlsclientconfig/tlsclientconfig.go:20"
	Apr 07 12:20:15 addons-660533 crio[660]: time="2025-04-07 12:20:15.036862965Z" level=debug msg="GET https://registry-1.docker.io/v2/" file="docker/docker_client.go:631"
	Apr 07 12:20:15 addons-660533 crio[660]: time="2025-04-07 12:20:15.037645745Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=24a828d0-1ff9-41f3-bfe0-88397ee150f6 name=/runtime.v1.RuntimeService/ListContainers
	Apr 07 12:20:15 addons-660533 crio[660]: time="2025-04-07 12:20:15.037721497Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=24a828d0-1ff9-41f3-bfe0-88397ee150f6 name=/runtime.v1.RuntimeService/ListContainers
	Apr 07 12:20:15 addons-660533 crio[660]: time="2025-04-07 12:20:15.038066528Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:8e40fbc1a180b199e032a2a3ea62e867a097b9104371ad33662c645f808d3d85,PodSandboxId:f1e8706c314720180025f5eb41deabf1f2e35e1899459dabde29780952b9067a,Metadata:&ContainerMetadata{Name:nginx,Attempt:0,},Image:&ImageSpec{Image:docker.io/library/nginx@sha256:4ff102c5d78d254a6f0da062b3cf39eaf07f01eec0927fd21e219d0af8bc0591,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1ff4bb4faebcfb1f7e01144fa9904a570ab9bab88694457855feb6c6bba3fa07,State:CONTAINER_RUNNING,CreatedAt:1744028277424425314,Labels:map[string]string{io.kubernetes.container.name: nginx,io.kubernetes.pod.name: nginx,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 721b8242-08d2-4e4b-b477-33911134cbdd,},Annotations:map[string]string{io.kubernetes.container.hash: cdfbc70a,io.kubernetes.container.ports: [{\"containerPort\":80,\"protocol\":\"TCP\"}],
io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:586a708342de3a9b8566bc17ae741547bda60e674da577bebca71154be652356,PodSandboxId:07081ca4a6fbe68536e792a4e79435f44c3840092712e8137b8e45f6b7586b7e,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c,State:CONTAINER_RUNNING,CreatedAt:1744028229229257310,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: e3da275a-a9b2-4918-9b3b-461b094d63cd,},Annotations:map[string]string{io.kubernetes.container.hash: 35e73d3c,io.kubernetes.con
tainer.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:34e61f84e9cd21db55d2d80227dd59debcdae9ac317900bf8b66d12bee35edf1,PodSandboxId:47b69a01222a5b8d134b2ebe2100f5fafda3700a308fb20a073db484bb2d7c98,Metadata:&ContainerMetadata{Name:controller,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/ingress-nginx/controller@sha256:62b61c42ec8dd877b85c0aa24c4744ce44d274bc16cc5d2364edfe67964ba55b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:ee44bc2368033ba6147d84fb376356de1e40e4778c20dd8b4817bd1636121ddf,State:CONTAINER_RUNNING,CreatedAt:1744028156437429660,Labels:map[string]string{io.kubernetes.container.name: controller,io.kubernetes.pod.name: ingress-nginx-controller-56d7c84fd4-2hzvh,io.kubernetes.pod.namespace: ingress-nginx,io.kubernetes.pod.uid: 4f8ab5f1-4d85-4ab6-b582-f61836a81f5d,},Annotations:map[string]string{io.kubernetes.
container.hash: 4e8eee94,io.kubernetes.container.ports: [{\"name\":\"http\",\"hostPort\":80,\"containerPort\":80,\"protocol\":\"TCP\"},{\"name\":\"https\",\"hostPort\":443,\"containerPort\":443,\"protocol\":\"TCP\"},{\"name\":\"webhook\",\"containerPort\":8443,\"protocol\":\"TCP\"}],io.kubernetes.container.preStopHandler: {\"exec\":{\"command\":[\"/wait-shutdown\"]}},io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 0,},},&Container{Id:1d5ad058091880567c39a9e9ef1063cabb2faf365413ae4d9d451e074f85a78d,PodSandboxId:091628a977e2a720a9549bf0982e5fb3dabec1ed6d283b6b29553a0d433b0dc4,Metadata:&ContainerMetadata{Name:patch,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/ingress-nginx/kube-webhook-certgen@sha256:a9f03b34a3cbfbb26d103a14046ab2c5130a80c3d69d526ff8063d2b37b9fd3f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a62eeff05
ba5194cac31b3f6180655290afa3ed3f2573bcd2aaff319416951eb,State:CONTAINER_EXITED,CreatedAt:1744028146754613069,Labels:map[string]string{io.kubernetes.container.name: patch,io.kubernetes.pod.name: ingress-nginx-admission-patch-788g7,io.kubernetes.pod.namespace: ingress-nginx,io.kubernetes.pod.uid: b9e1ed65-7d50-494a-b117-27798ca8cbcf,},Annotations:map[string]string{io.kubernetes.container.hash: 3f610496,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9fbaef3054f02cd308fb430b900d29b48f3a8559d1c2eebc650c9a60628ee8b5,PodSandboxId:89995ae0072b6f818ba1010639d1f65fed079fdfd6a673ca056e13b891a619a0,Metadata:&ContainerMetadata{Name:create,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/ingress-nginx/kube-webhook-certgen@sha256:a9f03b34a3cbfbb26d103a14046ab2c5130a80c3d69d526ff8063d2b37b9fd3f,Annotations:map[string]string{},UserSpecifiedImage:,
RuntimeHandler:,},ImageRef:a62eeff05ba5194cac31b3f6180655290afa3ed3f2573bcd2aaff319416951eb,State:CONTAINER_EXITED,CreatedAt:1744028143000629702,Labels:map[string]string{io.kubernetes.container.name: create,io.kubernetes.pod.name: ingress-nginx-admission-create-qmb8v,io.kubernetes.pod.namespace: ingress-nginx,io.kubernetes.pod.uid: 8fbaed5d-c704-409b-9f5f-4cdf3be4dcd5,},Annotations:map[string]string{io.kubernetes.container.hash: fe18a2bf,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ddda8761297f0846dcc4ea692a175cfc8edf453eba22f26eb842985a734d77bf,PodSandboxId:3c4f276fc921212399b70242137eca1db918d0c32cfcf7e200c7c68b0c89f6e2,Metadata:&ContainerMetadata{Name:amd-gpu-device-plugin,Attempt:0,},Image:&ImageSpec{Image:docker.io/rocm/k8s-device-plugin@sha256:f3835498cf2274e0a07c32b38c166c05a876f8eb776d756cc06805e599a3ba5f,Annotations:map[s
tring]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:d5e667c0f2bb6efe709d5abfeb749472af5cb459a5bb05d3ead8d547968c63b8,State:CONTAINER_RUNNING,CreatedAt:1744028093287965819,Labels:map[string]string{io.kubernetes.container.name: amd-gpu-device-plugin,io.kubernetes.pod.name: amd-gpu-device-plugin-vhjpx,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c3f03dee-8b86-4818-b32a-e5a77c247e53,},Annotations:map[string]string{io.kubernetes.container.hash: 1903e071,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9a5b7cddbfe417008a83b8b7e0dd9b9facad9d888f3ee9369e71f237caeff072,PodSandboxId:84867c33648c5e3fdbe1fe6f86ca5ae254b14ac3e725e19541c19148d0301725,Metadata:&ContainerMetadata{Name:minikube-ingress-dns,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/minikube-ingress-dns@sha256:07c8f5b205a3f8971bfc6d460978ae00de3
5f17e5d5392b1de8de02356f85dab,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:30dd67412fdea30479de8d5d9bf760870308d24d911c59ea1f1757f04c33cc29,State:CONTAINER_RUNNING,CreatedAt:1744028090351655303,Labels:map[string]string{io.kubernetes.container.name: minikube-ingress-dns,io.kubernetes.pod.name: kube-ingress-dns-minikube,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 63b69b2e-c0db-49af-9871-d106be2e08e8,},Annotations:map[string]string{io.kubernetes.container.hash: 8778d474,io.kubernetes.container.ports: [{\"hostPort\":53,\"containerPort\":53,\"protocol\":\"UDP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3a51d1d5086ed0e6633c86ce82155ae85195517f7200964c19422b9744db9ff9,PodSandboxId:8cb33e72248287c460b42de561c80550ca4efdbf6d4841cdb0172dc3985828c7,Metadata:&ContainerMetadata{Name:st
orage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1744028079508520872,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f649b84e-e785-4204-bb89-5dff296307a9,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f61c46ae6e7de1d5135fc20c9bf6b6fe0593deac455c735b2b3f322f09e0355b,PodSandboxId:0f3e6654dc2334cb256d2ce97a014d0bca25928390d6ed77c31a47d2392300f7,Metadata:&ContainerMetadata{Name:coredns,Attemp
t:0,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1744028076427813826,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-668d6bf9bc-cnkpx,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: aef46db8-e8ba-4d7b-8aa1-0e4fa9e424b9,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Containe
r{Id:36009d84dfaa2e7bfce45b75cffbd14623fa35a7f503842c2b8e1e61e0ba7bb5,PodSandboxId:589bd439af874dd8eaee7e02a83ccc1dc6f6b209b7bc65b1509b4e41a344e43a,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:f1332858868e1c6a905123b21e2e322ab45a5b99a3532e68ff49a87c2266ebc5,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:f1332858868e1c6a905123b21e2e322ab45a5b99a3532e68ff49a87c2266ebc5,State:CONTAINER_RUNNING,CreatedAt:1744028072706095297,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-fz5dl,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: aa0dbf0c-4444-4f25-a3c3-41368949c06d,},Annotations:map[string]string{io.kubernetes.container.hash: b4fecc5,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:70a9b845e851be748447e3bb94063
e69fc820825104c529c73441f9d70d700e3,PodSandboxId:ffc9a342e30c7775bc4c7a88986449285f9c749a864964816f4caa366d67143e,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:a9e7e6b294baf1695fccb862d956c5d3ad8510e1e4ca1535f35dc09f247abbfc,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a9e7e6b294baf1695fccb862d956c5d3ad8510e1e4ca1535f35dc09f247abbfc,State:CONTAINER_RUNNING,CreatedAt:1744028060580590767,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-addons-660533,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a489e371931ea52fdff809a8f9f44622,},Annotations:map[string]string{io.kubernetes.container.hash: e68be80f,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4088afaa05116fc506e5bcf70a9e01c9215d43501104cb7db11b617496b976d3,PodSandboxI
d:bb3f2b9aae0762c216f1145edbd1cfa92aaee01ed70b4657024ecdd304a0b183,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:d8e673e7c9983f1f53569a9d2ba786c8abb42e3f744f77dc97a595f3caf9435d,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:d8e673e7c9983f1f53569a9d2ba786c8abb42e3f744f77dc97a595f3caf9435d,State:CONTAINER_RUNNING,CreatedAt:1744028060587267919,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-addons-660533,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 560b757b0d35bc661609e4a150ebb604,},Annotations:map[string]string{io.kubernetes.container.hash: 4c5aaea3,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7dc48e7e75c9e801e97c86226392b2d7bc20f04dd75ce60a9838365a3bcd2563,PodSandboxId:071fba9879ec4a5
62a23f8b2fc8df010dddec1057d128dad80ffe0df6314e959,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:b6a454c5a800d201daacead6ff195ec6049fe6dc086621b0670bca912efaf389,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:b6a454c5a800d201daacead6ff195ec6049fe6dc086621b0670bca912efaf389,State:CONTAINER_RUNNING,CreatedAt:1744028060516807563,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-addons-660533,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3e4efc58b4e40c3b4c033ab94b4d0692,},Annotations:map[string]string{io.kubernetes.container.hash: 51692d3d,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:39cf766345f56567cf4dbeec431a600fed773a53c54f951dc4ed85849594b419,PodSandboxId:aa7b1
ea8584fcb77106b65c1854ce24543599b225fa96ac9783b0a6173ef2a54,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:85b7a174738baecbc53029b7913cd430a2060e0cbdb5f56c7957d32ff7f241ef,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:85b7a174738baecbc53029b7913cd430a2060e0cbdb5f56c7957d32ff7f241ef,State:CONTAINER_RUNNING,CreatedAt:1744028060527892309,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-addons-660533,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: beec334109a198f2a71849603d420f79,},Annotations:map[string]string{io.kubernetes.container.hash: 7745040f,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=24a828d0-1ff9-41f3-bfe0-88397ee150f6 name=/runtime.v1.RuntimeServ
ice/ListContainers
	Apr 07 12:20:15 addons-660533 crio[660]: time="2025-04-07 12:20:15.075946842Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=1ed56e35-dc46-4164-b214-acfde922da4f name=/runtime.v1.RuntimeService/Version
	Apr 07 12:20:15 addons-660533 crio[660]: time="2025-04-07 12:20:15.076031060Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=1ed56e35-dc46-4164-b214-acfde922da4f name=/runtime.v1.RuntimeService/Version
	Apr 07 12:20:15 addons-660533 crio[660]: time="2025-04-07 12:20:15.077546854Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=59cc1853-b233-4b7a-9a4c-34e09123a40a name=/runtime.v1.ImageService/ImageFsInfo
	Apr 07 12:20:15 addons-660533 crio[660]: time="2025-04-07 12:20:15.078754056Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1744028415078723679,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:595808,},InodesUsed:&UInt64Value{Value:201,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=59cc1853-b233-4b7a-9a4c-34e09123a40a name=/runtime.v1.ImageService/ImageFsInfo
	Apr 07 12:20:15 addons-660533 crio[660]: time="2025-04-07 12:20:15.079524881Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=82e38c9b-cf2a-4129-b0a9-7bb0821dde48 name=/runtime.v1.RuntimeService/ListContainers
	Apr 07 12:20:15 addons-660533 crio[660]: time="2025-04-07 12:20:15.079596797Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=82e38c9b-cf2a-4129-b0a9-7bb0821dde48 name=/runtime.v1.RuntimeService/ListContainers
	Apr 07 12:20:15 addons-660533 crio[660]: time="2025-04-07 12:20:15.079893042Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:8e40fbc1a180b199e032a2a3ea62e867a097b9104371ad33662c645f808d3d85,PodSandboxId:f1e8706c314720180025f5eb41deabf1f2e35e1899459dabde29780952b9067a,Metadata:&ContainerMetadata{Name:nginx,Attempt:0,},Image:&ImageSpec{Image:docker.io/library/nginx@sha256:4ff102c5d78d254a6f0da062b3cf39eaf07f01eec0927fd21e219d0af8bc0591,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1ff4bb4faebcfb1f7e01144fa9904a570ab9bab88694457855feb6c6bba3fa07,State:CONTAINER_RUNNING,CreatedAt:1744028277424425314,Labels:map[string]string{io.kubernetes.container.name: nginx,io.kubernetes.pod.name: nginx,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 721b8242-08d2-4e4b-b477-33911134cbdd,},Annotations:map[string]string{io.kubernetes.container.hash: cdfbc70a,io.kubernetes.container.ports: [{\"containerPort\":80,\"protocol\":\"TCP\"}],
io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:586a708342de3a9b8566bc17ae741547bda60e674da577bebca71154be652356,PodSandboxId:07081ca4a6fbe68536e792a4e79435f44c3840092712e8137b8e45f6b7586b7e,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c,State:CONTAINER_RUNNING,CreatedAt:1744028229229257310,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: e3da275a-a9b2-4918-9b3b-461b094d63cd,},Annotations:map[string]string{io.kubernetes.container.hash: 35e73d3c,io.kubernetes.con
tainer.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:34e61f84e9cd21db55d2d80227dd59debcdae9ac317900bf8b66d12bee35edf1,PodSandboxId:47b69a01222a5b8d134b2ebe2100f5fafda3700a308fb20a073db484bb2d7c98,Metadata:&ContainerMetadata{Name:controller,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/ingress-nginx/controller@sha256:62b61c42ec8dd877b85c0aa24c4744ce44d274bc16cc5d2364edfe67964ba55b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:ee44bc2368033ba6147d84fb376356de1e40e4778c20dd8b4817bd1636121ddf,State:CONTAINER_RUNNING,CreatedAt:1744028156437429660,Labels:map[string]string{io.kubernetes.container.name: controller,io.kubernetes.pod.name: ingress-nginx-controller-56d7c84fd4-2hzvh,io.kubernetes.pod.namespace: ingress-nginx,io.kubernetes.pod.uid: 4f8ab5f1-4d85-4ab6-b582-f61836a81f5d,},Annotations:map[string]string{io.kubernetes.
container.hash: 4e8eee94,io.kubernetes.container.ports: [{\"name\":\"http\",\"hostPort\":80,\"containerPort\":80,\"protocol\":\"TCP\"},{\"name\":\"https\",\"hostPort\":443,\"containerPort\":443,\"protocol\":\"TCP\"},{\"name\":\"webhook\",\"containerPort\":8443,\"protocol\":\"TCP\"}],io.kubernetes.container.preStopHandler: {\"exec\":{\"command\":[\"/wait-shutdown\"]}},io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 0,},},&Container{Id:1d5ad058091880567c39a9e9ef1063cabb2faf365413ae4d9d451e074f85a78d,PodSandboxId:091628a977e2a720a9549bf0982e5fb3dabec1ed6d283b6b29553a0d433b0dc4,Metadata:&ContainerMetadata{Name:patch,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/ingress-nginx/kube-webhook-certgen@sha256:a9f03b34a3cbfbb26d103a14046ab2c5130a80c3d69d526ff8063d2b37b9fd3f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a62eeff05
ba5194cac31b3f6180655290afa3ed3f2573bcd2aaff319416951eb,State:CONTAINER_EXITED,CreatedAt:1744028146754613069,Labels:map[string]string{io.kubernetes.container.name: patch,io.kubernetes.pod.name: ingress-nginx-admission-patch-788g7,io.kubernetes.pod.namespace: ingress-nginx,io.kubernetes.pod.uid: b9e1ed65-7d50-494a-b117-27798ca8cbcf,},Annotations:map[string]string{io.kubernetes.container.hash: 3f610496,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9fbaef3054f02cd308fb430b900d29b48f3a8559d1c2eebc650c9a60628ee8b5,PodSandboxId:89995ae0072b6f818ba1010639d1f65fed079fdfd6a673ca056e13b891a619a0,Metadata:&ContainerMetadata{Name:create,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/ingress-nginx/kube-webhook-certgen@sha256:a9f03b34a3cbfbb26d103a14046ab2c5130a80c3d69d526ff8063d2b37b9fd3f,Annotations:map[string]string{},UserSpecifiedImage:,
RuntimeHandler:,},ImageRef:a62eeff05ba5194cac31b3f6180655290afa3ed3f2573bcd2aaff319416951eb,State:CONTAINER_EXITED,CreatedAt:1744028143000629702,Labels:map[string]string{io.kubernetes.container.name: create,io.kubernetes.pod.name: ingress-nginx-admission-create-qmb8v,io.kubernetes.pod.namespace: ingress-nginx,io.kubernetes.pod.uid: 8fbaed5d-c704-409b-9f5f-4cdf3be4dcd5,},Annotations:map[string]string{io.kubernetes.container.hash: fe18a2bf,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ddda8761297f0846dcc4ea692a175cfc8edf453eba22f26eb842985a734d77bf,PodSandboxId:3c4f276fc921212399b70242137eca1db918d0c32cfcf7e200c7c68b0c89f6e2,Metadata:&ContainerMetadata{Name:amd-gpu-device-plugin,Attempt:0,},Image:&ImageSpec{Image:docker.io/rocm/k8s-device-plugin@sha256:f3835498cf2274e0a07c32b38c166c05a876f8eb776d756cc06805e599a3ba5f,Annotations:map[s
tring]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:d5e667c0f2bb6efe709d5abfeb749472af5cb459a5bb05d3ead8d547968c63b8,State:CONTAINER_RUNNING,CreatedAt:1744028093287965819,Labels:map[string]string{io.kubernetes.container.name: amd-gpu-device-plugin,io.kubernetes.pod.name: amd-gpu-device-plugin-vhjpx,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c3f03dee-8b86-4818-b32a-e5a77c247e53,},Annotations:map[string]string{io.kubernetes.container.hash: 1903e071,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9a5b7cddbfe417008a83b8b7e0dd9b9facad9d888f3ee9369e71f237caeff072,PodSandboxId:84867c33648c5e3fdbe1fe6f86ca5ae254b14ac3e725e19541c19148d0301725,Metadata:&ContainerMetadata{Name:minikube-ingress-dns,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/minikube-ingress-dns@sha256:07c8f5b205a3f8971bfc6d460978ae00de3
5f17e5d5392b1de8de02356f85dab,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:30dd67412fdea30479de8d5d9bf760870308d24d911c59ea1f1757f04c33cc29,State:CONTAINER_RUNNING,CreatedAt:1744028090351655303,Labels:map[string]string{io.kubernetes.container.name: minikube-ingress-dns,io.kubernetes.pod.name: kube-ingress-dns-minikube,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 63b69b2e-c0db-49af-9871-d106be2e08e8,},Annotations:map[string]string{io.kubernetes.container.hash: 8778d474,io.kubernetes.container.ports: [{\"hostPort\":53,\"containerPort\":53,\"protocol\":\"UDP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3a51d1d5086ed0e6633c86ce82155ae85195517f7200964c19422b9744db9ff9,PodSandboxId:8cb33e72248287c460b42de561c80550ca4efdbf6d4841cdb0172dc3985828c7,Metadata:&ContainerMetadata{Name:st
orage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1744028079508520872,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f649b84e-e785-4204-bb89-5dff296307a9,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f61c46ae6e7de1d5135fc20c9bf6b6fe0593deac455c735b2b3f322f09e0355b,PodSandboxId:0f3e6654dc2334cb256d2ce97a014d0bca25928390d6ed77c31a47d2392300f7,Metadata:&ContainerMetadata{Name:coredns,Attemp
t:0,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1744028076427813826,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-668d6bf9bc-cnkpx,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: aef46db8-e8ba-4d7b-8aa1-0e4fa9e424b9,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Containe
r{Id:36009d84dfaa2e7bfce45b75cffbd14623fa35a7f503842c2b8e1e61e0ba7bb5,PodSandboxId:589bd439af874dd8eaee7e02a83ccc1dc6f6b209b7bc65b1509b4e41a344e43a,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:f1332858868e1c6a905123b21e2e322ab45a5b99a3532e68ff49a87c2266ebc5,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:f1332858868e1c6a905123b21e2e322ab45a5b99a3532e68ff49a87c2266ebc5,State:CONTAINER_RUNNING,CreatedAt:1744028072706095297,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-fz5dl,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: aa0dbf0c-4444-4f25-a3c3-41368949c06d,},Annotations:map[string]string{io.kubernetes.container.hash: b4fecc5,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:70a9b845e851be748447e3bb94063
e69fc820825104c529c73441f9d70d700e3,PodSandboxId:ffc9a342e30c7775bc4c7a88986449285f9c749a864964816f4caa366d67143e,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:a9e7e6b294baf1695fccb862d956c5d3ad8510e1e4ca1535f35dc09f247abbfc,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a9e7e6b294baf1695fccb862d956c5d3ad8510e1e4ca1535f35dc09f247abbfc,State:CONTAINER_RUNNING,CreatedAt:1744028060580590767,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-addons-660533,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a489e371931ea52fdff809a8f9f44622,},Annotations:map[string]string{io.kubernetes.container.hash: e68be80f,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4088afaa05116fc506e5bcf70a9e01c9215d43501104cb7db11b617496b976d3,PodSandboxI
d:bb3f2b9aae0762c216f1145edbd1cfa92aaee01ed70b4657024ecdd304a0b183,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:d8e673e7c9983f1f53569a9d2ba786c8abb42e3f744f77dc97a595f3caf9435d,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:d8e673e7c9983f1f53569a9d2ba786c8abb42e3f744f77dc97a595f3caf9435d,State:CONTAINER_RUNNING,CreatedAt:1744028060587267919,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-addons-660533,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 560b757b0d35bc661609e4a150ebb604,},Annotations:map[string]string{io.kubernetes.container.hash: 4c5aaea3,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7dc48e7e75c9e801e97c86226392b2d7bc20f04dd75ce60a9838365a3bcd2563,PodSandboxId:071fba9879ec4a5
62a23f8b2fc8df010dddec1057d128dad80ffe0df6314e959,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:b6a454c5a800d201daacead6ff195ec6049fe6dc086621b0670bca912efaf389,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:b6a454c5a800d201daacead6ff195ec6049fe6dc086621b0670bca912efaf389,State:CONTAINER_RUNNING,CreatedAt:1744028060516807563,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-addons-660533,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3e4efc58b4e40c3b4c033ab94b4d0692,},Annotations:map[string]string{io.kubernetes.container.hash: 51692d3d,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:39cf766345f56567cf4dbeec431a600fed773a53c54f951dc4ed85849594b419,PodSandboxId:aa7b1
ea8584fcb77106b65c1854ce24543599b225fa96ac9783b0a6173ef2a54,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:85b7a174738baecbc53029b7913cd430a2060e0cbdb5f56c7957d32ff7f241ef,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:85b7a174738baecbc53029b7913cd430a2060e0cbdb5f56c7957d32ff7f241ef,State:CONTAINER_RUNNING,CreatedAt:1744028060527892309,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-addons-660533,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: beec334109a198f2a71849603d420f79,},Annotations:map[string]string{io.kubernetes.container.hash: 7745040f,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=82e38c9b-cf2a-4129-b0a9-7bb0821dde48 name=/runtime.v1.RuntimeServ
ice/ListContainers
	Apr 07 12:20:15 addons-660533 crio[660]: time="2025-04-07 12:20:15.094467540Z" level=debug msg="Request: &ListPodSandboxRequest{Filter:nil,}" file="otel-collector/interceptors.go:62" id=8f232b41-c15e-46d0-b080-7ad0d1018c0a name=/runtime.v1.RuntimeService/ListPodSandbox
	Apr 07 12:20:15 addons-660533 crio[660]: time="2025-04-07 12:20:15.094825886Z" level=debug msg="Response: &ListPodSandboxResponse{Items:[]*PodSandbox{&PodSandbox{Id:dec97cc62bf319f3247662a96c383934d2d9553876cda6fffb1ba1ab3607f205,Metadata:&PodSandboxMetadata{Name:hello-world-app-7d9564db4-9cvqm,Uid:ee2753d1-8149-4065-9c15-a3dd1d83228b,Namespace:default,Attempt:0,},State:SANDBOX_READY,CreatedAt:1744028413948168581,Labels:map[string]string{app: hello-world-app,io.kubernetes.container.name: POD,io.kubernetes.pod.name: hello-world-app-7d9564db4-9cvqm,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: ee2753d1-8149-4065-9c15-a3dd1d83228b,pod-template-hash: 7d9564db4,},Annotations:map[string]string{kubernetes.io/config.seen: 2025-04-07T12:20:13.630696794Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:f1e8706c314720180025f5eb41deabf1f2e35e1899459dabde29780952b9067a,Metadata:&PodSandboxMetadata{Name:nginx,Uid:721b8242-08d2-4e4b-b477-33911134cbdd,Namespace:default,Attempt:0,},St
ate:SANDBOX_READY,CreatedAt:1744028273061916191,Labels:map[string]string{io.kubernetes.container.name: POD,io.kubernetes.pod.name: nginx,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 721b8242-08d2-4e4b-b477-33911134cbdd,run: nginx,},Annotations:map[string]string{kubernetes.io/config.seen: 2025-04-07T12:17:52.747579480Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:07081ca4a6fbe68536e792a4e79435f44c3840092712e8137b8e45f6b7586b7e,Metadata:&PodSandboxMetadata{Name:busybox,Uid:e3da275a-a9b2-4918-9b3b-461b094d63cd,Namespace:default,Attempt:0,},State:SANDBOX_READY,CreatedAt:1744028225142182874,Labels:map[string]string{integration-test: busybox,io.kubernetes.container.name: POD,io.kubernetes.pod.name: busybox,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: e3da275a-a9b2-4918-9b3b-461b094d63cd,},Annotations:map[string]string{kubernetes.io/config.seen: 2025-04-07T12:17:04.832153729Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:47b69a01222a5b8d134b2
ebe2100f5fafda3700a308fb20a073db484bb2d7c98,Metadata:&PodSandboxMetadata{Name:ingress-nginx-controller-56d7c84fd4-2hzvh,Uid:4f8ab5f1-4d85-4ab6-b582-f61836a81f5d,Namespace:ingress-nginx,Attempt:0,},State:SANDBOX_READY,CreatedAt:1744028145255682048,Labels:map[string]string{app.kubernetes.io/component: controller,app.kubernetes.io/instance: ingress-nginx,app.kubernetes.io/name: ingress-nginx,gcp-auth-skip-secret: true,io.kubernetes.container.name: POD,io.kubernetes.pod.name: ingress-nginx-controller-56d7c84fd4-2hzvh,io.kubernetes.pod.namespace: ingress-nginx,io.kubernetes.pod.uid: 4f8ab5f1-4d85-4ab6-b582-f61836a81f5d,pod-template-hash: 56d7c84fd4,},Annotations:map[string]string{kubernetes.io/config.seen: 2025-04-07T12:14:41.043601774Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:091628a977e2a720a9549bf0982e5fb3dabec1ed6d283b6b29553a0d433b0dc4,Metadata:&PodSandboxMetadata{Name:ingress-nginx-admission-patch-788g7,Uid:b9e1ed65-7d50-494a-b117-27798ca8cbcf,Namespace:ingress-nginx,Attempt:0,},St
ate:SANDBOX_NOTREADY,CreatedAt:1744028083635583098,Labels:map[string]string{app.kubernetes.io/component: admission-webhook,app.kubernetes.io/instance: ingress-nginx,app.kubernetes.io/name: ingress-nginx,batch.kubernetes.io/controller-uid: 129eec9c-3b70-4789-93e3-89c716b166f5,batch.kubernetes.io/job-name: ingress-nginx-admission-patch,controller-uid: 129eec9c-3b70-4789-93e3-89c716b166f5,io.kubernetes.container.name: POD,io.kubernetes.pod.name: ingress-nginx-admission-patch-788g7,io.kubernetes.pod.namespace: ingress-nginx,io.kubernetes.pod.uid: b9e1ed65-7d50-494a-b117-27798ca8cbcf,job-name: ingress-nginx-admission-patch,},Annotations:map[string]string{kubernetes.io/config.seen: 2025-04-07T12:14:41.315277790Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:89995ae0072b6f818ba1010639d1f65fed079fdfd6a673ca056e13b891a619a0,Metadata:&PodSandboxMetadata{Name:ingress-nginx-admission-create-qmb8v,Uid:8fbaed5d-c704-409b-9f5f-4cdf3be4dcd5,Namespace:ingress-nginx,Attempt:0,},State:SANDBOX_NOTREADY,Crea
tedAt:1744028081538851907,Labels:map[string]string{app.kubernetes.io/component: admission-webhook,app.kubernetes.io/instance: ingress-nginx,app.kubernetes.io/name: ingress-nginx,batch.kubernetes.io/controller-uid: 2ff7d981-323a-410c-8399-4a2f0ed1d083,batch.kubernetes.io/job-name: ingress-nginx-admission-create,controller-uid: 2ff7d981-323a-410c-8399-4a2f0ed1d083,io.kubernetes.container.name: POD,io.kubernetes.pod.name: ingress-nginx-admission-create-qmb8v,io.kubernetes.pod.namespace: ingress-nginx,io.kubernetes.pod.uid: 8fbaed5d-c704-409b-9f5f-4cdf3be4dcd5,job-name: ingress-nginx-admission-create,},Annotations:map[string]string{kubernetes.io/config.seen: 2025-04-07T12:14:41.144965653Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:8cb33e72248287c460b42de561c80550ca4efdbf6d4841cdb0172dc3985828c7,Metadata:&PodSandboxMetadata{Name:storage-provisioner,Uid:f649b84e-e785-4204-bb89-5dff296307a9,Namespace:kube-system,Attempt:0,},State:SANDBOX_READY,CreatedAt:1744028077486853649,Labels:map[string]
string{addonmanager.kubernetes.io/mode: Reconcile,integration-test: storage-provisioner,io.kubernetes.container.name: POD,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f649b84e-e785-4204-bb89-5dff296307a9,},Annotations:map[string]string{kubectl.kubernetes.io/last-applied-configuration: {\"apiVersion\":\"v1\",\"kind\":\"Pod\",\"metadata\":{\"annotations\":{},\"labels\":{\"addonmanager.kubernetes.io/mode\":\"Reconcile\",\"integration-test\":\"storage-provisioner\"},\"name\":\"storage-provisioner\",\"namespace\":\"kube-system\"},\"spec\":{\"containers\":[{\"command\":[\"/storage-provisioner\"],\"image\":\"gcr.io/k8s-minikube/storage-provisioner:v5\",\"imagePullPolicy\":\"IfNotPresent\",\"name\":\"storage-provisioner\",\"volumeMounts\":[{\"mountPath\":\"/tmp\",\"name\":\"tmp\"}]}],\"hostNetwork\":true,\"serviceAccountName\":\"storage-provisioner\",\"volumes\":[{\"hostPath\":{\"path\":\"/tmp\",\"type\":\"Directory\"},\"name\":\"tmp\"}]}}\n,kubernetes.io
/config.seen: 2025-04-07T12:14:36.871233627Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:84867c33648c5e3fdbe1fe6f86ca5ae254b14ac3e725e19541c19148d0301725,Metadata:&PodSandboxMetadata{Name:kube-ingress-dns-minikube,Uid:63b69b2e-c0db-49af-9871-d106be2e08e8,Namespace:kube-system,Attempt:0,},State:SANDBOX_READY,CreatedAt:1744028076949913975,Labels:map[string]string{app: minikube-ingress-dns,app.kubernetes.io/part-of: kube-system,io.kubernetes.container.name: POD,io.kubernetes.pod.name: kube-ingress-dns-minikube,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 63b69b2e-c0db-49af-9871-d106be2e08e8,},Annotations:map[string]string{kubectl.kubernetes.io/last-applied-configuration: {\"apiVersion\":\"v1\",\"kind\":\"Pod\",\"metadata\":{\"annotations\":{},\"labels\":{\"app\":\"minikube-ingress-dns\",\"app.kubernetes.io/part-of\":\"kube-system\"},\"name\":\"kube-ingress-dns-minikube\",\"namespace\":\"kube-system\"},\"spec\":{\"containers\":[{\"env\":[{\"name\":\"DNS_PORT\",\"value\":
\"53\"},{\"name\":\"POD_IP\",\"valueFrom\":{\"fieldRef\":{\"fieldPath\":\"status.podIP\"}}}],\"image\":\"gcr.io/k8s-minikube/minikube-ingress-dns:0.0.3@sha256:4211a1de532376c881851542238121b26792225faa36a7b02dccad88fd05797c\",\"imagePullPolicy\":\"IfNotPresent\",\"name\":\"minikube-ingress-dns\",\"ports\":[{\"containerPort\":53,\"protocol\":\"UDP\"}]}],\"hostNetwork\":true,\"serviceAccountName\":\"minikube-ingress-dns\"}}\n,kubernetes.io/config.seen: 2025-04-07T12:14:36.333867678Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:3c4f276fc921212399b70242137eca1db918d0c32cfcf7e200c7c68b0c89f6e2,Metadata:&PodSandboxMetadata{Name:amd-gpu-device-plugin-vhjpx,Uid:c3f03dee-8b86-4818-b32a-e5a77c247e53,Namespace:kube-system,Attempt:0,},State:SANDBOX_READY,CreatedAt:1744028074935789136,Labels:map[string]string{controller-revision-hash: 578b4c597,io.kubernetes.container.name: POD,io.kubernetes.pod.name: amd-gpu-device-plugin-vhjpx,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c3f03de
e-8b86-4818-b32a-e5a77c247e53,k8s-app: amd-gpu-device-plugin,name: amd-gpu-device-plugin,pod-template-generation: 1,},Annotations:map[string]string{kubernetes.io/config.seen: 2025-04-07T12:14:34.604010807Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:589bd439af874dd8eaee7e02a83ccc1dc6f6b209b7bc65b1509b4e41a344e43a,Metadata:&PodSandboxMetadata{Name:kube-proxy-fz5dl,Uid:aa0dbf0c-4444-4f25-a3c3-41368949c06d,Namespace:kube-system,Attempt:0,},State:SANDBOX_READY,CreatedAt:1744028072172761202,Labels:map[string]string{controller-revision-hash: 7bb84c4984,io.kubernetes.container.name: POD,io.kubernetes.pod.name: kube-proxy-fz5dl,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: aa0dbf0c-4444-4f25-a3c3-41368949c06d,k8s-app: kube-proxy,pod-template-generation: 1,},Annotations:map[string]string{kubernetes.io/config.seen: 2025-04-07T12:14:31.264686308Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:0f3e6654dc2334cb256d2ce97a014d0bca25928390d6ed77c31a47d2392300f7,M
etadata:&PodSandboxMetadata{Name:coredns-668d6bf9bc-cnkpx,Uid:aef46db8-e8ba-4d7b-8aa1-0e4fa9e424b9,Namespace:kube-system,Attempt:0,},State:SANDBOX_READY,CreatedAt:1744028072061711951,Labels:map[string]string{io.kubernetes.container.name: POD,io.kubernetes.pod.name: coredns-668d6bf9bc-cnkpx,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: aef46db8-e8ba-4d7b-8aa1-0e4fa9e424b9,k8s-app: kube-dns,pod-template-hash: 668d6bf9bc,},Annotations:map[string]string{kubernetes.io/config.seen: 2025-04-07T12:14:31.746180548Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:aa7b1ea8584fcb77106b65c1854ce24543599b225fa96ac9783b0a6173ef2a54,Metadata:&PodSandboxMetadata{Name:kube-apiserver-addons-660533,Uid:beec334109a198f2a71849603d420f79,Namespace:kube-system,Attempt:0,},State:SANDBOX_READY,CreatedAt:1744028060325692944,Labels:map[string]string{component: kube-apiserver,io.kubernetes.container.name: POD,io.kubernetes.pod.name: kube-apiserver-addons-660533,io.kubernetes.pod.namespace: kube-syste
m,io.kubernetes.pod.uid: beec334109a198f2a71849603d420f79,tier: control-plane,},Annotations:map[string]string{kubeadm.kubernetes.io/kube-apiserver.advertise-address.endpoint: 192.168.39.112:8443,kubernetes.io/config.hash: beec334109a198f2a71849603d420f79,kubernetes.io/config.seen: 2025-04-07T12:14:19.847237939Z,kubernetes.io/config.source: file,},RuntimeHandler:,},&PodSandbox{Id:ffc9a342e30c7775bc4c7a88986449285f9c749a864964816f4caa366d67143e,Metadata:&PodSandboxMetadata{Name:etcd-addons-660533,Uid:a489e371931ea52fdff809a8f9f44622,Namespace:kube-system,Attempt:0,},State:SANDBOX_READY,CreatedAt:1744028060320515679,Labels:map[string]string{component: etcd,io.kubernetes.container.name: POD,io.kubernetes.pod.name: etcd-addons-660533,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a489e371931ea52fdff809a8f9f44622,tier: control-plane,},Annotations:map[string]string{kubeadm.kubernetes.io/etcd.advertise-client-urls: https://192.168.39.112:2379,kubernetes.io/config.hash: a489e371931ea52fdff809a8f9f4462
2,kubernetes.io/config.seen: 2025-04-07T12:14:19.847236706Z,kubernetes.io/config.source: file,},RuntimeHandler:,},&PodSandbox{Id:bb3f2b9aae0762c216f1145edbd1cfa92aaee01ed70b4657024ecdd304a0b183,Metadata:&PodSandboxMetadata{Name:kube-scheduler-addons-660533,Uid:560b757b0d35bc661609e4a150ebb604,Namespace:kube-system,Attempt:0,},State:SANDBOX_READY,CreatedAt:1744028060313497150,Labels:map[string]string{component: kube-scheduler,io.kubernetes.container.name: POD,io.kubernetes.pod.name: kube-scheduler-addons-660533,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 560b757b0d35bc661609e4a150ebb604,tier: control-plane,},Annotations:map[string]string{kubernetes.io/config.hash: 560b757b0d35bc661609e4a150ebb604,kubernetes.io/config.seen: 2025-04-07T12:14:19.847235354Z,kubernetes.io/config.source: file,},RuntimeHandler:,},&PodSandbox{Id:071fba9879ec4a562a23f8b2fc8df010dddec1057d128dad80ffe0df6314e959,Metadata:&PodSandboxMetadata{Name:kube-controller-manager-addons-660533,Uid:3e4efc58b4e40c3b4c033ab94b4d069
2,Namespace:kube-system,Attempt:0,},State:SANDBOX_READY,CreatedAt:1744028060310780583,Labels:map[string]string{component: kube-controller-manager,io.kubernetes.container.name: POD,io.kubernetes.pod.name: kube-controller-manager-addons-660533,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3e4efc58b4e40c3b4c033ab94b4d0692,tier: control-plane,},Annotations:map[string]string{kubernetes.io/config.hash: 3e4efc58b4e40c3b4c033ab94b4d0692,kubernetes.io/config.seen: 2025-04-07T12:14:19.847230431Z,kubernetes.io/config.source: file,},RuntimeHandler:,},},}" file="otel-collector/interceptors.go:74" id=8f232b41-c15e-46d0-b080-7ad0d1018c0a name=/runtime.v1.RuntimeService/ListPodSandbox
	Apr 07 12:20:15 addons-660533 crio[660]: time="2025-04-07 12:20:15.095737505Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=bda53fd7-083e-4b14-bf3c-c75c774eb3c3 name=/runtime.v1.RuntimeService/ListContainers
	Apr 07 12:20:15 addons-660533 crio[660]: time="2025-04-07 12:20:15.095821925Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=bda53fd7-083e-4b14-bf3c-c75c774eb3c3 name=/runtime.v1.RuntimeService/ListContainers
	Apr 07 12:20:15 addons-660533 crio[660]: time="2025-04-07 12:20:15.096149264Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:8e40fbc1a180b199e032a2a3ea62e867a097b9104371ad33662c645f808d3d85,PodSandboxId:f1e8706c314720180025f5eb41deabf1f2e35e1899459dabde29780952b9067a,Metadata:&ContainerMetadata{Name:nginx,Attempt:0,},Image:&ImageSpec{Image:docker.io/library/nginx@sha256:4ff102c5d78d254a6f0da062b3cf39eaf07f01eec0927fd21e219d0af8bc0591,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1ff4bb4faebcfb1f7e01144fa9904a570ab9bab88694457855feb6c6bba3fa07,State:CONTAINER_RUNNING,CreatedAt:1744028277424425314,Labels:map[string]string{io.kubernetes.container.name: nginx,io.kubernetes.pod.name: nginx,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 721b8242-08d2-4e4b-b477-33911134cbdd,},Annotations:map[string]string{io.kubernetes.container.hash: cdfbc70a,io.kubernetes.container.ports: [{\"containerPort\":80,\"protocol\":\"TCP\"}],
io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:586a708342de3a9b8566bc17ae741547bda60e674da577bebca71154be652356,PodSandboxId:07081ca4a6fbe68536e792a4e79435f44c3840092712e8137b8e45f6b7586b7e,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c,State:CONTAINER_RUNNING,CreatedAt:1744028229229257310,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: e3da275a-a9b2-4918-9b3b-461b094d63cd,},Annotations:map[string]string{io.kubernetes.container.hash: 35e73d3c,io.kubernetes.con
tainer.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:34e61f84e9cd21db55d2d80227dd59debcdae9ac317900bf8b66d12bee35edf1,PodSandboxId:47b69a01222a5b8d134b2ebe2100f5fafda3700a308fb20a073db484bb2d7c98,Metadata:&ContainerMetadata{Name:controller,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/ingress-nginx/controller@sha256:62b61c42ec8dd877b85c0aa24c4744ce44d274bc16cc5d2364edfe67964ba55b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:ee44bc2368033ba6147d84fb376356de1e40e4778c20dd8b4817bd1636121ddf,State:CONTAINER_RUNNING,CreatedAt:1744028156437429660,Labels:map[string]string{io.kubernetes.container.name: controller,io.kubernetes.pod.name: ingress-nginx-controller-56d7c84fd4-2hzvh,io.kubernetes.pod.namespace: ingress-nginx,io.kubernetes.pod.uid: 4f8ab5f1-4d85-4ab6-b582-f61836a81f5d,},Annotations:map[string]string{io.kubernetes.
container.hash: 4e8eee94,io.kubernetes.container.ports: [{\"name\":\"http\",\"hostPort\":80,\"containerPort\":80,\"protocol\":\"TCP\"},{\"name\":\"https\",\"hostPort\":443,\"containerPort\":443,\"protocol\":\"TCP\"},{\"name\":\"webhook\",\"containerPort\":8443,\"protocol\":\"TCP\"}],io.kubernetes.container.preStopHandler: {\"exec\":{\"command\":[\"/wait-shutdown\"]}},io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 0,},},&Container{Id:1d5ad058091880567c39a9e9ef1063cabb2faf365413ae4d9d451e074f85a78d,PodSandboxId:091628a977e2a720a9549bf0982e5fb3dabec1ed6d283b6b29553a0d433b0dc4,Metadata:&ContainerMetadata{Name:patch,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/ingress-nginx/kube-webhook-certgen@sha256:a9f03b34a3cbfbb26d103a14046ab2c5130a80c3d69d526ff8063d2b37b9fd3f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a62eeff05
ba5194cac31b3f6180655290afa3ed3f2573bcd2aaff319416951eb,State:CONTAINER_EXITED,CreatedAt:1744028146754613069,Labels:map[string]string{io.kubernetes.container.name: patch,io.kubernetes.pod.name: ingress-nginx-admission-patch-788g7,io.kubernetes.pod.namespace: ingress-nginx,io.kubernetes.pod.uid: b9e1ed65-7d50-494a-b117-27798ca8cbcf,},Annotations:map[string]string{io.kubernetes.container.hash: 3f610496,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9fbaef3054f02cd308fb430b900d29b48f3a8559d1c2eebc650c9a60628ee8b5,PodSandboxId:89995ae0072b6f818ba1010639d1f65fed079fdfd6a673ca056e13b891a619a0,Metadata:&ContainerMetadata{Name:create,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/ingress-nginx/kube-webhook-certgen@sha256:a9f03b34a3cbfbb26d103a14046ab2c5130a80c3d69d526ff8063d2b37b9fd3f,Annotations:map[string]string{},UserSpecifiedImage:,
RuntimeHandler:,},ImageRef:a62eeff05ba5194cac31b3f6180655290afa3ed3f2573bcd2aaff319416951eb,State:CONTAINER_EXITED,CreatedAt:1744028143000629702,Labels:map[string]string{io.kubernetes.container.name: create,io.kubernetes.pod.name: ingress-nginx-admission-create-qmb8v,io.kubernetes.pod.namespace: ingress-nginx,io.kubernetes.pod.uid: 8fbaed5d-c704-409b-9f5f-4cdf3be4dcd5,},Annotations:map[string]string{io.kubernetes.container.hash: fe18a2bf,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ddda8761297f0846dcc4ea692a175cfc8edf453eba22f26eb842985a734d77bf,PodSandboxId:3c4f276fc921212399b70242137eca1db918d0c32cfcf7e200c7c68b0c89f6e2,Metadata:&ContainerMetadata{Name:amd-gpu-device-plugin,Attempt:0,},Image:&ImageSpec{Image:docker.io/rocm/k8s-device-plugin@sha256:f3835498cf2274e0a07c32b38c166c05a876f8eb776d756cc06805e599a3ba5f,Annotations:map[s
tring]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:d5e667c0f2bb6efe709d5abfeb749472af5cb459a5bb05d3ead8d547968c63b8,State:CONTAINER_RUNNING,CreatedAt:1744028093287965819,Labels:map[string]string{io.kubernetes.container.name: amd-gpu-device-plugin,io.kubernetes.pod.name: amd-gpu-device-plugin-vhjpx,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c3f03dee-8b86-4818-b32a-e5a77c247e53,},Annotations:map[string]string{io.kubernetes.container.hash: 1903e071,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9a5b7cddbfe417008a83b8b7e0dd9b9facad9d888f3ee9369e71f237caeff072,PodSandboxId:84867c33648c5e3fdbe1fe6f86ca5ae254b14ac3e725e19541c19148d0301725,Metadata:&ContainerMetadata{Name:minikube-ingress-dns,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/minikube-ingress-dns@sha256:07c8f5b205a3f8971bfc6d460978ae00de3
5f17e5d5392b1de8de02356f85dab,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:30dd67412fdea30479de8d5d9bf760870308d24d911c59ea1f1757f04c33cc29,State:CONTAINER_RUNNING,CreatedAt:1744028090351655303,Labels:map[string]string{io.kubernetes.container.name: minikube-ingress-dns,io.kubernetes.pod.name: kube-ingress-dns-minikube,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 63b69b2e-c0db-49af-9871-d106be2e08e8,},Annotations:map[string]string{io.kubernetes.container.hash: 8778d474,io.kubernetes.container.ports: [{\"hostPort\":53,\"containerPort\":53,\"protocol\":\"UDP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3a51d1d5086ed0e6633c86ce82155ae85195517f7200964c19422b9744db9ff9,PodSandboxId:8cb33e72248287c460b42de561c80550ca4efdbf6d4841cdb0172dc3985828c7,Metadata:&ContainerMetadata{Name:st
orage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1744028079508520872,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f649b84e-e785-4204-bb89-5dff296307a9,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f61c46ae6e7de1d5135fc20c9bf6b6fe0593deac455c735b2b3f322f09e0355b,PodSandboxId:0f3e6654dc2334cb256d2ce97a014d0bca25928390d6ed77c31a47d2392300f7,Metadata:&ContainerMetadata{Name:coredns,Attemp
t:0,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1744028076427813826,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-668d6bf9bc-cnkpx,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: aef46db8-e8ba-4d7b-8aa1-0e4fa9e424b9,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Containe
r{Id:36009d84dfaa2e7bfce45b75cffbd14623fa35a7f503842c2b8e1e61e0ba7bb5,PodSandboxId:589bd439af874dd8eaee7e02a83ccc1dc6f6b209b7bc65b1509b4e41a344e43a,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:f1332858868e1c6a905123b21e2e322ab45a5b99a3532e68ff49a87c2266ebc5,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:f1332858868e1c6a905123b21e2e322ab45a5b99a3532e68ff49a87c2266ebc5,State:CONTAINER_RUNNING,CreatedAt:1744028072706095297,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-fz5dl,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: aa0dbf0c-4444-4f25-a3c3-41368949c06d,},Annotations:map[string]string{io.kubernetes.container.hash: b4fecc5,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:70a9b845e851be748447e3bb94063
e69fc820825104c529c73441f9d70d700e3,PodSandboxId:ffc9a342e30c7775bc4c7a88986449285f9c749a864964816f4caa366d67143e,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:a9e7e6b294baf1695fccb862d956c5d3ad8510e1e4ca1535f35dc09f247abbfc,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a9e7e6b294baf1695fccb862d956c5d3ad8510e1e4ca1535f35dc09f247abbfc,State:CONTAINER_RUNNING,CreatedAt:1744028060580590767,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-addons-660533,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a489e371931ea52fdff809a8f9f44622,},Annotations:map[string]string{io.kubernetes.container.hash: e68be80f,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4088afaa05116fc506e5bcf70a9e01c9215d43501104cb7db11b617496b976d3,PodSandboxI
d:bb3f2b9aae0762c216f1145edbd1cfa92aaee01ed70b4657024ecdd304a0b183,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:d8e673e7c9983f1f53569a9d2ba786c8abb42e3f744f77dc97a595f3caf9435d,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:d8e673e7c9983f1f53569a9d2ba786c8abb42e3f744f77dc97a595f3caf9435d,State:CONTAINER_RUNNING,CreatedAt:1744028060587267919,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-addons-660533,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 560b757b0d35bc661609e4a150ebb604,},Annotations:map[string]string{io.kubernetes.container.hash: 4c5aaea3,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7dc48e7e75c9e801e97c86226392b2d7bc20f04dd75ce60a9838365a3bcd2563,PodSandboxId:071fba9879ec4a5
62a23f8b2fc8df010dddec1057d128dad80ffe0df6314e959,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:b6a454c5a800d201daacead6ff195ec6049fe6dc086621b0670bca912efaf389,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:b6a454c5a800d201daacead6ff195ec6049fe6dc086621b0670bca912efaf389,State:CONTAINER_RUNNING,CreatedAt:1744028060516807563,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-addons-660533,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3e4efc58b4e40c3b4c033ab94b4d0692,},Annotations:map[string]string{io.kubernetes.container.hash: 51692d3d,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:39cf766345f56567cf4dbeec431a600fed773a53c54f951dc4ed85849594b419,PodSandboxId:aa7b1
ea8584fcb77106b65c1854ce24543599b225fa96ac9783b0a6173ef2a54,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:85b7a174738baecbc53029b7913cd430a2060e0cbdb5f56c7957d32ff7f241ef,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:85b7a174738baecbc53029b7913cd430a2060e0cbdb5f56c7957d32ff7f241ef,State:CONTAINER_RUNNING,CreatedAt:1744028060527892309,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-addons-660533,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: beec334109a198f2a71849603d420f79,},Annotations:map[string]string{io.kubernetes.container.hash: 7745040f,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=bda53fd7-083e-4b14-bf3c-c75c774eb3c3 name=/runtime.v1.RuntimeServ
ice/ListContainers
	Apr 07 12:20:15 addons-660533 crio[660]: time="2025-04-07 12:20:15.097231234Z" level=debug msg="Request: &ListPodSandboxRequest{Filter:&PodSandboxFilter{Id:,State:nil,LabelSelector:map[string]string{io.kubernetes.pod.uid: ee2753d1-8149-4065-9c15-a3dd1d83228b,},},}" file="otel-collector/interceptors.go:62" id=65f780fe-62b9-4837-a2d8-b9fd9b7353b1 name=/runtime.v1.RuntimeService/ListPodSandbox
	Apr 07 12:20:15 addons-660533 crio[660]: time="2025-04-07 12:20:15.097397584Z" level=debug msg="Response: &ListPodSandboxResponse{Items:[]*PodSandbox{&PodSandbox{Id:dec97cc62bf319f3247662a96c383934d2d9553876cda6fffb1ba1ab3607f205,Metadata:&PodSandboxMetadata{Name:hello-world-app-7d9564db4-9cvqm,Uid:ee2753d1-8149-4065-9c15-a3dd1d83228b,Namespace:default,Attempt:0,},State:SANDBOX_READY,CreatedAt:1744028413948168581,Labels:map[string]string{app: hello-world-app,io.kubernetes.container.name: POD,io.kubernetes.pod.name: hello-world-app-7d9564db4-9cvqm,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: ee2753d1-8149-4065-9c15-a3dd1d83228b,pod-template-hash: 7d9564db4,},Annotations:map[string]string{kubernetes.io/config.seen: 2025-04-07T12:20:13.630696794Z,kubernetes.io/config.source: api,},RuntimeHandler:,},},}" file="otel-collector/interceptors.go:74" id=65f780fe-62b9-4837-a2d8-b9fd9b7353b1 name=/runtime.v1.RuntimeService/ListPodSandbox
	Apr 07 12:20:15 addons-660533 crio[660]: time="2025-04-07 12:20:15.097830397Z" level=debug msg="Request: &PodSandboxStatusRequest{PodSandboxId:dec97cc62bf319f3247662a96c383934d2d9553876cda6fffb1ba1ab3607f205,Verbose:false,}" file="otel-collector/interceptors.go:62" id=ea050f34-4a44-43ac-8f87-52a4b1ba1194 name=/runtime.v1.RuntimeService/PodSandboxStatus
	Apr 07 12:20:15 addons-660533 crio[660]: time="2025-04-07 12:20:15.098024467Z" level=debug msg="Response: &PodSandboxStatusResponse{Status:&PodSandboxStatus{Id:dec97cc62bf319f3247662a96c383934d2d9553876cda6fffb1ba1ab3607f205,Metadata:&PodSandboxMetadata{Name:hello-world-app-7d9564db4-9cvqm,Uid:ee2753d1-8149-4065-9c15-a3dd1d83228b,Namespace:default,Attempt:0,},State:SANDBOX_READY,CreatedAt:1744028413948168581,Network:&PodSandboxNetworkStatus{Ip:10.244.0.33,AdditionalIps:[]*PodIP{},},Linux:&LinuxPodSandboxStatus{Namespaces:&Namespace{Options:&NamespaceOption{Network:POD,Pid:CONTAINER,Ipc:POD,TargetId:,UsernsOptions:nil,},},},Labels:map[string]string{app: hello-world-app,io.kubernetes.container.name: POD,io.kubernetes.pod.name: hello-world-app-7d9564db4-9cvqm,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: ee2753d1-8149-4065-9c15-a3dd1d83228b,pod-template-hash: 7d9564db4,},Annotations:map[string]string{kubernetes.io/config.seen: 2025-04-07T12:20:13.630696794Z,kubernetes.io/config.source: api
,},RuntimeHandler:,},Info:map[string]string{},ContainersStatuses:[]*ContainerStatus{},Timestamp:0,}" file="otel-collector/interceptors.go:74" id=ea050f34-4a44-43ac-8f87-52a4b1ba1194 name=/runtime.v1.RuntimeService/PodSandboxStatus
	Apr 07 12:20:15 addons-660533 crio[660]: time="2025-04-07 12:20:15.098431279Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{io.kubernetes.pod.uid: ee2753d1-8149-4065-9c15-a3dd1d83228b,},},}" file="otel-collector/interceptors.go:62" id=b3f37c32-976a-4431-aa5c-4cdc4021a240 name=/runtime.v1.RuntimeService/ListContainers
	Apr 07 12:20:15 addons-660533 crio[660]: time="2025-04-07 12:20:15.098506567Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=b3f37c32-976a-4431-aa5c-4cdc4021a240 name=/runtime.v1.RuntimeService/ListContainers
	Apr 07 12:20:15 addons-660533 crio[660]: time="2025-04-07 12:20:15.098572301Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{},}" file="otel-collector/interceptors.go:74" id=b3f37c32-976a-4431-aa5c-4cdc4021a240 name=/runtime.v1.RuntimeService/ListContainers
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                                        CREATED             STATE               NAME                      ATTEMPT             POD ID              POD
	8e40fbc1a180b       docker.io/library/nginx@sha256:4ff102c5d78d254a6f0da062b3cf39eaf07f01eec0927fd21e219d0af8bc0591                              2 minutes ago       Running             nginx                     0                   f1e8706c31472       nginx
	586a708342de3       gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e                          3 minutes ago       Running             busybox                   0                   07081ca4a6fbe       busybox
	34e61f84e9cd2       registry.k8s.io/ingress-nginx/controller@sha256:62b61c42ec8dd877b85c0aa24c4744ce44d274bc16cc5d2364edfe67964ba55b             4 minutes ago       Running             controller                0                   47b69a01222a5       ingress-nginx-controller-56d7c84fd4-2hzvh
	1d5ad05809188       registry.k8s.io/ingress-nginx/kube-webhook-certgen@sha256:a9f03b34a3cbfbb26d103a14046ab2c5130a80c3d69d526ff8063d2b37b9fd3f   4 minutes ago       Exited              patch                     0                   091628a977e2a       ingress-nginx-admission-patch-788g7
	9fbaef3054f02       registry.k8s.io/ingress-nginx/kube-webhook-certgen@sha256:a9f03b34a3cbfbb26d103a14046ab2c5130a80c3d69d526ff8063d2b37b9fd3f   4 minutes ago       Exited              create                    0                   89995ae0072b6       ingress-nginx-admission-create-qmb8v
	ddda8761297f0       docker.io/rocm/k8s-device-plugin@sha256:f3835498cf2274e0a07c32b38c166c05a876f8eb776d756cc06805e599a3ba5f                     5 minutes ago       Running             amd-gpu-device-plugin     0                   3c4f276fc9212       amd-gpu-device-plugin-vhjpx
	9a5b7cddbfe41       gcr.io/k8s-minikube/minikube-ingress-dns@sha256:07c8f5b205a3f8971bfc6d460978ae00de35f17e5d5392b1de8de02356f85dab             5 minutes ago       Running             minikube-ingress-dns      0                   84867c33648c5       kube-ingress-dns-minikube
	3a51d1d5086ed       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                                             5 minutes ago       Running             storage-provisioner       0                   8cb33e7224828       storage-provisioner
	f61c46ae6e7de       c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6                                                             5 minutes ago       Running             coredns                   0                   0f3e6654dc233       coredns-668d6bf9bc-cnkpx
	36009d84dfaa2       f1332858868e1c6a905123b21e2e322ab45a5b99a3532e68ff49a87c2266ebc5                                                             5 minutes ago       Running             kube-proxy                0                   589bd439af874       kube-proxy-fz5dl
	4088afaa05116       d8e673e7c9983f1f53569a9d2ba786c8abb42e3f744f77dc97a595f3caf9435d                                                             5 minutes ago       Running             kube-scheduler            0                   bb3f2b9aae076       kube-scheduler-addons-660533
	70a9b845e851b       a9e7e6b294baf1695fccb862d956c5d3ad8510e1e4ca1535f35dc09f247abbfc                                                             5 minutes ago       Running             etcd                      0                   ffc9a342e30c7       etcd-addons-660533
	39cf766345f56       85b7a174738baecbc53029b7913cd430a2060e0cbdb5f56c7957d32ff7f241ef                                                             5 minutes ago       Running             kube-apiserver            0                   aa7b1ea8584fc       kube-apiserver-addons-660533
	7dc48e7e75c9e       b6a454c5a800d201daacead6ff195ec6049fe6dc086621b0670bca912efaf389                                                             5 minutes ago       Running             kube-controller-manager   0                   071fba9879ec4       kube-controller-manager-addons-660533
	
	
	==> coredns [f61c46ae6e7de1d5135fc20c9bf6b6fe0593deac455c735b2b3f322f09e0355b] <==
	[INFO] 10.244.0.8:36636 - 57224 "AAAA IN registry.kube-system.svc.cluster.local.kube-system.svc.cluster.local. udp 109 false 1232" NXDOMAIN qr,aa,rd 179 0.000292653s
	[INFO] 10.244.0.8:36636 - 43219 "A IN registry.kube-system.svc.cluster.local.svc.cluster.local. udp 85 false 1232" NXDOMAIN qr,aa,rd 167 0.000400537s
	[INFO] 10.244.0.8:36636 - 12757 "AAAA IN registry.kube-system.svc.cluster.local.svc.cluster.local. udp 85 false 1232" NXDOMAIN qr,aa,rd 167 0.000169937s
	[INFO] 10.244.0.8:36636 - 51794 "A IN registry.kube-system.svc.cluster.local.cluster.local. udp 81 false 1232" NXDOMAIN qr,aa,rd 163 0.000156774s
	[INFO] 10.244.0.8:36636 - 25651 "AAAA IN registry.kube-system.svc.cluster.local.cluster.local. udp 81 false 1232" NXDOMAIN qr,aa,rd 163 0.000360861s
	[INFO] 10.244.0.8:36636 - 51087 "A IN registry.kube-system.svc.cluster.local. udp 67 false 1232" NOERROR qr,aa,rd 110 0.000184022s
	[INFO] 10.244.0.8:36636 - 45483 "AAAA IN registry.kube-system.svc.cluster.local. udp 67 false 1232" NOERROR qr,aa,rd 149 0.000245221s
	[INFO] 10.244.0.8:38841 - 8909 "AAAA IN registry.kube-system.svc.cluster.local.kube-system.svc.cluster.local. udp 86 false 512" NXDOMAIN qr,aa,rd 179 0.00020226s
	[INFO] 10.244.0.8:38841 - 8660 "A IN registry.kube-system.svc.cluster.local.kube-system.svc.cluster.local. udp 86 false 512" NXDOMAIN qr,aa,rd 179 0.000485298s
	[INFO] 10.244.0.8:43763 - 18359 "A IN registry.kube-system.svc.cluster.local.svc.cluster.local. udp 74 false 512" NXDOMAIN qr,aa,rd 167 0.000233582s
	[INFO] 10.244.0.8:43763 - 18618 "AAAA IN registry.kube-system.svc.cluster.local.svc.cluster.local. udp 74 false 512" NXDOMAIN qr,aa,rd 167 0.000425161s
	[INFO] 10.244.0.8:36476 - 57643 "A IN registry.kube-system.svc.cluster.local.cluster.local. udp 70 false 512" NXDOMAIN qr,aa,rd 163 0.000134416s
	[INFO] 10.244.0.8:36476 - 57858 "AAAA IN registry.kube-system.svc.cluster.local.cluster.local. udp 70 false 512" NXDOMAIN qr,aa,rd 163 0.000123143s
	[INFO] 10.244.0.8:38915 - 2747 "A IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 110 0.000189604s
	[INFO] 10.244.0.8:38915 - 2968 "AAAA IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 149 0.000125092s
	[INFO] 10.244.0.23:55302 - 60611 "AAAA IN storage.googleapis.com.gcp-auth.svc.cluster.local. udp 78 false 1232" NXDOMAIN qr,aa,rd 160 0.000293717s
	[INFO] 10.244.0.23:34778 - 56589 "A IN storage.googleapis.com.gcp-auth.svc.cluster.local. udp 78 false 1232" NXDOMAIN qr,aa,rd 160 0.000138165s
	[INFO] 10.244.0.23:58301 - 3432 "A IN storage.googleapis.com.svc.cluster.local. udp 69 false 1232" NXDOMAIN qr,aa,rd 151 0.000157265s
	[INFO] 10.244.0.23:36885 - 22521 "AAAA IN storage.googleapis.com.svc.cluster.local. udp 69 false 1232" NXDOMAIN qr,aa,rd 151 0.000218284s
	[INFO] 10.244.0.23:51941 - 56634 "A IN storage.googleapis.com.cluster.local. udp 65 false 1232" NXDOMAIN qr,aa,rd 147 0.000352214s
	[INFO] 10.244.0.23:59050 - 64696 "AAAA IN storage.googleapis.com.cluster.local. udp 65 false 1232" NXDOMAIN qr,aa,rd 147 0.000166718s
	[INFO] 10.244.0.23:55166 - 8005 "A IN storage.googleapis.com. udp 51 false 1232" NOERROR qr,rd,ra 572 0.005434822s
	[INFO] 10.244.0.23:34390 - 22936 "AAAA IN storage.googleapis.com. udp 51 false 1232" NOERROR qr,rd,ra 240 0.00633681s
	[INFO] 10.244.0.26:52193 - 2 "AAAA IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 149 0.000312236s
	[INFO] 10.244.0.26:56279 - 3 "A IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 110 0.000194558s
	
	
	==> describe nodes <==
	Name:               addons-660533
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=addons-660533
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=33e6edc58d2014d70e908473920ef4ac8eae1e43
	                    minikube.k8s.io/name=addons-660533
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2025_04_07T12_14_26_0700
	                    minikube.k8s.io/version=v1.35.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	                    topology.hostpath.csi/node=addons-660533
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Mon, 07 Apr 2025 12:14:23 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  addons-660533
	  AcquireTime:     <unset>
	  RenewTime:       Mon, 07 Apr 2025 12:20:14 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Mon, 07 Apr 2025 12:18:32 +0000   Mon, 07 Apr 2025 12:14:21 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Mon, 07 Apr 2025 12:18:32 +0000   Mon, 07 Apr 2025 12:14:21 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Mon, 07 Apr 2025 12:18:32 +0000   Mon, 07 Apr 2025 12:14:21 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Mon, 07 Apr 2025 12:18:32 +0000   Mon, 07 Apr 2025 12:14:26 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.39.112
	  Hostname:    addons-660533
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             3912780Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             3912780Ki
	  pods:               110
	System Info:
	  Machine ID:                 fd791e70fc7b4b0284c882adfa5dc558
	  System UUID:                fd791e70-fc7b-4b02-84c8-82adfa5dc558
	  Boot ID:                    48ac76dc-a77c-4bf0-9c9f-f34ba765f139
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.32.2
	  Kube-Proxy Version:         v1.32.2
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (13 in total)
	  Namespace                   Name                                         CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                         ------------  ----------  ---------------  -------------  ---
	  default                     busybox                                      0 (0%)        0 (0%)      0 (0%)           0 (0%)         3m11s
	  default                     hello-world-app-7d9564db4-9cvqm              0 (0%)        0 (0%)      0 (0%)           0 (0%)         2s
	  default                     nginx                                        0 (0%)        0 (0%)      0 (0%)           0 (0%)         2m23s
	  ingress-nginx               ingress-nginx-controller-56d7c84fd4-2hzvh    100m (5%)     0 (0%)      90Mi (2%)        0 (0%)         5m35s
	  kube-system                 amd-gpu-device-plugin-vhjpx                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         5m41s
	  kube-system                 coredns-668d6bf9bc-cnkpx                     100m (5%)     0 (0%)      70Mi (1%)        170Mi (4%)     5m44s
	  kube-system                 etcd-addons-660533                           100m (5%)     0 (0%)      100Mi (2%)       0 (0%)         5m49s
	  kube-system                 kube-apiserver-addons-660533                 250m (12%)    0 (0%)      0 (0%)           0 (0%)         5m49s
	  kube-system                 kube-controller-manager-addons-660533        200m (10%)    0 (0%)      0 (0%)           0 (0%)         5m49s
	  kube-system                 kube-ingress-dns-minikube                    0 (0%)        0 (0%)      0 (0%)           0 (0%)         5m39s
	  kube-system                 kube-proxy-fz5dl                             0 (0%)        0 (0%)      0 (0%)           0 (0%)         5m44s
	  kube-system                 kube-scheduler-addons-660533                 100m (5%)     0 (0%)      0 (0%)           0 (0%)         5m49s
	  kube-system                 storage-provisioner                          0 (0%)        0 (0%)      0 (0%)           0 (0%)         5m39s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                850m (42%)  0 (0%)
	  memory             260Mi (6%)  170Mi (4%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age    From             Message
	  ----    ------                   ----   ----             -------
	  Normal  Starting                 5m41s  kube-proxy       
	  Normal  Starting                 5m56s  kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  5m56s  kubelet          Node addons-660533 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    5m56s  kubelet          Node addons-660533 status is now: NodeHasNoDiskPressure
	  Normal  Starting                 5m50s  kubelet          Starting kubelet.
	  Normal  NodeAllocatableEnforced  5m49s  kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasSufficientMemory  5m49s  kubelet          Node addons-660533 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    5m49s  kubelet          Node addons-660533 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     5m49s  kubelet          Node addons-660533 status is now: NodeHasSufficientPID
	  Normal  NodeReady                5m49s  kubelet          Node addons-660533 status is now: NodeReady
	  Normal  RegisteredNode           5m45s  node-controller  Node addons-660533 event: Registered Node addons-660533 in Controller
	
	
	==> dmesg <==
	[  +5.857205] systemd-fstab-generator[1360]: Ignoring "noauto" option for root device
	[  +0.182499] kauditd_printk_skb: 21 callbacks suppressed
	[  +5.045988] kauditd_printk_skb: 101 callbacks suppressed
	[  +5.118214] kauditd_printk_skb: 138 callbacks suppressed
	[  +8.453137] kauditd_printk_skb: 96 callbacks suppressed
	[Apr 7 12:15] kauditd_printk_skb: 4 callbacks suppressed
	[  +9.586975] kauditd_printk_skb: 2 callbacks suppressed
	[  +9.594072] kauditd_printk_skb: 2 callbacks suppressed
	[  +5.717850] kauditd_printk_skb: 39 callbacks suppressed
	[  +5.124572] kauditd_printk_skb: 47 callbacks suppressed
	[  +7.032550] kauditd_printk_skb: 13 callbacks suppressed
	[Apr 7 12:16] kauditd_printk_skb: 16 callbacks suppressed
	[Apr 7 12:17] kauditd_printk_skb: 9 callbacks suppressed
	[ +14.349285] kauditd_printk_skb: 7 callbacks suppressed
	[  +6.447601] kauditd_printk_skb: 2 callbacks suppressed
	[  +5.326797] kauditd_printk_skb: 6 callbacks suppressed
	[  +5.770626] kauditd_printk_skb: 23 callbacks suppressed
	[  +5.703734] kauditd_printk_skb: 39 callbacks suppressed
	[  +5.359143] kauditd_printk_skb: 23 callbacks suppressed
	[  +5.022360] kauditd_printk_skb: 29 callbacks suppressed
	[  +5.141196] kauditd_printk_skb: 41 callbacks suppressed
	[Apr 7 12:18] kauditd_printk_skb: 12 callbacks suppressed
	[ +14.826812] kauditd_printk_skb: 8 callbacks suppressed
	[  +7.884273] kauditd_printk_skb: 7 callbacks suppressed
	[  +7.666542] kauditd_printk_skb: 49 callbacks suppressed
	
	
	==> etcd [70a9b845e851be748447e3bb94063e69fc820825104c529c73441f9d70d700e3] <==
	{"level":"warn","ts":"2025-04-07T12:15:53.211516Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"409.587959ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods\" limit:1 ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2025-04-07T12:15:53.211595Z","caller":"traceutil/trace.go:171","msg":"trace[802919933] range","detail":"{range_begin:/registry/pods; range_end:; response_count:0; response_revision:1057; }","duration":"409.710653ms","start":"2025-04-07T12:15:52.801871Z","end":"2025-04-07T12:15:53.211581Z","steps":["trace[802919933] 'range keys from in-memory index tree'  (duration: 409.510609ms)"],"step_count":1}
	{"level":"warn","ts":"2025-04-07T12:15:53.211624Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2025-04-07T12:15:52.801851Z","time spent":"409.766469ms","remote":"127.0.0.1:37470","response type":"/etcdserverpb.KV/Range","request count":0,"request size":18,"response count":0,"response size":28,"request content":"key:\"/registry/pods\" limit:1 "}
	{"level":"info","ts":"2025-04-07T12:15:53.211723Z","caller":"traceutil/trace.go:171","msg":"trace[120473616] transaction","detail":"{read_only:false; response_revision:1058; number_of_response:1; }","duration":"384.544663ms","start":"2025-04-07T12:15:52.827162Z","end":"2025-04-07T12:15:53.211706Z","steps":["trace[120473616] 'process raft request'  (duration: 380.452829ms)"],"step_count":1}
	{"level":"warn","ts":"2025-04-07T12:15:53.211864Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2025-04-07T12:15:52.827130Z","time spent":"384.675639ms","remote":"127.0.0.1:37512","response type":"/etcdserverpb.KV/Txn","request count":1,"request size":3132,"response count":0,"response size":39,"request content":"compare:<target:MOD key:\"/registry/jobs/gcp-auth/gcp-auth-certs-patch\" mod_revision:825 > success:<request_put:<key:\"/registry/jobs/gcp-auth/gcp-auth-certs-patch\" value_size:3080 >> failure:<request_range:<key:\"/registry/jobs/gcp-auth/gcp-auth-certs-patch\" > >"}
	{"level":"info","ts":"2025-04-07T12:15:53.214784Z","caller":"traceutil/trace.go:171","msg":"trace[929543925] linearizableReadLoop","detail":"{readStateIndex:1090; appliedIndex:1088; }","duration":"329.239703ms","start":"2025-04-07T12:15:52.885529Z","end":"2025-04-07T12:15:53.214769Z","steps":["trace[929543925] 'read index received'  (duration: 322.179767ms)","trace[929543925] 'applied index is now lower than readState.Index'  (duration: 7.059117ms)"],"step_count":2}
	{"level":"info","ts":"2025-04-07T12:15:53.215030Z","caller":"traceutil/trace.go:171","msg":"trace[1284713549] transaction","detail":"{read_only:false; response_revision:1059; number_of_response:1; }","duration":"363.559206ms","start":"2025-04-07T12:15:52.851460Z","end":"2025-04-07T12:15:53.215020Z","steps":["trace[1284713549] 'process raft request'  (duration: 363.192651ms)"],"step_count":1}
	{"level":"warn","ts":"2025-04-07T12:15:53.215102Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2025-04-07T12:15:52.851437Z","time spent":"363.614652ms","remote":"127.0.0.1:37512","response type":"/etcdserverpb.KV/Txn","request count":1,"request size":3133,"response count":0,"response size":39,"request content":"compare:<target:MOD key:\"/registry/jobs/gcp-auth/gcp-auth-certs-create\" mod_revision:808 > success:<request_put:<key:\"/registry/jobs/gcp-auth/gcp-auth-certs-create\" value_size:3080 >> failure:<request_range:<key:\"/registry/jobs/gcp-auth/gcp-auth-certs-create\" > >"}
	{"level":"warn","ts":"2025-04-07T12:15:53.215216Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"329.703025ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods\" limit:1 ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2025-04-07T12:15:53.215235Z","caller":"traceutil/trace.go:171","msg":"trace[956906764] range","detail":"{range_begin:/registry/pods; range_end:; response_count:0; response_revision:1059; }","duration":"329.744161ms","start":"2025-04-07T12:15:52.885485Z","end":"2025-04-07T12:15:53.215229Z","steps":["trace[956906764] 'agreement among raft nodes before linearized reading'  (duration: 329.70883ms)"],"step_count":1}
	{"level":"warn","ts":"2025-04-07T12:15:53.215269Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2025-04-07T12:15:52.885465Z","time spent":"329.780517ms","remote":"127.0.0.1:37470","response type":"/etcdserverpb.KV/Range","request count":0,"request size":18,"response count":0,"response size":28,"request content":"key:\"/registry/pods\" limit:1 "}
	{"level":"warn","ts":"2025-04-07T12:15:53.215414Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"159.859233ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods\" limit:1 ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2025-04-07T12:15:53.215433Z","caller":"traceutil/trace.go:171","msg":"trace[1719669524] range","detail":"{range_begin:/registry/pods; range_end:; response_count:0; response_revision:1059; }","duration":"159.920501ms","start":"2025-04-07T12:15:53.055507Z","end":"2025-04-07T12:15:53.215428Z","steps":["trace[1719669524] 'agreement among raft nodes before linearized reading'  (duration: 159.886757ms)"],"step_count":1}
	{"level":"info","ts":"2025-04-07T12:15:55.532207Z","caller":"traceutil/trace.go:171","msg":"trace[272794920] linearizableReadLoop","detail":"{readStateIndex:1098; appliedIndex:1097; }","duration":"230.778144ms","start":"2025-04-07T12:15:55.301409Z","end":"2025-04-07T12:15:55.532187Z","steps":["trace[272794920] 'read index received'  (duration: 230.621802ms)","trace[272794920] 'applied index is now lower than readState.Index'  (duration: 155.641µs)"],"step_count":2}
	{"level":"warn","ts":"2025-04-07T12:15:55.532489Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2025-04-07T12:15:55.207364Z","time spent":"325.121551ms","remote":"127.0.0.1:37318","response type":"/etcdserverpb.Lease/LeaseGrant","request count":-1,"request size":-1,"response count":-1,"response size":-1,"request content":""}
	{"level":"warn","ts":"2025-04-07T12:15:55.532583Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"231.127186ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods\" limit:1 ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2025-04-07T12:15:55.532625Z","caller":"traceutil/trace.go:171","msg":"trace[2019041923] range","detail":"{range_begin:/registry/pods; range_end:; response_count:0; response_revision:1066; }","duration":"231.297605ms","start":"2025-04-07T12:15:55.301319Z","end":"2025-04-07T12:15:55.532617Z","steps":["trace[2019041923] 'agreement among raft nodes before linearized reading'  (duration: 231.171696ms)"],"step_count":1}
	{"level":"warn","ts":"2025-04-07T12:15:55.532800Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"168.835187ms","expected-duration":"100ms","prefix":"read-only range ","request":"limit:1 keys_only:true ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2025-04-07T12:15:55.532846Z","caller":"traceutil/trace.go:171","msg":"trace[437953790] range","detail":"{range_begin:; range_end:; response_count:0; response_revision:1066; }","duration":"168.88459ms","start":"2025-04-07T12:15:55.363953Z","end":"2025-04-07T12:15:55.532837Z","steps":["trace[437953790] 'agreement among raft nodes before linearized reading'  (duration: 168.823277ms)"],"step_count":1}
	{"level":"warn","ts":"2025-04-07T12:15:55.533182Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"148.174177ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods\" limit:1 ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2025-04-07T12:15:55.533280Z","caller":"traceutil/trace.go:171","msg":"trace[1642306323] range","detail":"{range_begin:/registry/pods; range_end:; response_count:0; response_revision:1066; }","duration":"148.29735ms","start":"2025-04-07T12:15:55.384973Z","end":"2025-04-07T12:15:55.533270Z","steps":["trace[1642306323] 'agreement among raft nodes before linearized reading'  (duration: 148.142375ms)"],"step_count":1}
	{"level":"warn","ts":"2025-04-07T12:15:55.533564Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"118.72836ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/health\" ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2025-04-07T12:15:55.533881Z","caller":"traceutil/trace.go:171","msg":"trace[1088275689] range","detail":"{range_begin:/registry/health; range_end:; response_count:0; response_revision:1066; }","duration":"119.03602ms","start":"2025-04-07T12:15:55.414828Z","end":"2025-04-07T12:15:55.533864Z","steps":["trace[1088275689] 'agreement among raft nodes before linearized reading'  (duration: 118.167829ms)"],"step_count":1}
	{"level":"info","ts":"2025-04-07T12:18:08.290032Z","caller":"traceutil/trace.go:171","msg":"trace[1267076866] transaction","detail":"{read_only:false; response_revision:1702; number_of_response:1; }","duration":"381.804177ms","start":"2025-04-07T12:18:07.908212Z","end":"2025-04-07T12:18:08.290016Z","steps":["trace[1267076866] 'process raft request'  (duration: 381.688748ms)"],"step_count":1}
	{"level":"warn","ts":"2025-04-07T12:18:08.290265Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2025-04-07T12:18:07.908198Z","time spent":"381.986068ms","remote":"127.0.0.1:37440","response type":"/etcdserverpb.KV/Txn","request count":1,"request size":1098,"response count":0,"response size":39,"request content":"compare:<target:MOD key:\"/registry/services/endpoints/kube-system/k8s.io-minikube-hostpath\" mod_revision:1700 > success:<request_put:<key:\"/registry/services/endpoints/kube-system/k8s.io-minikube-hostpath\" value_size:1025 >> failure:<request_range:<key:\"/registry/services/endpoints/kube-system/k8s.io-minikube-hostpath\" > >"}
	
	
	==> kernel <==
	 12:20:15 up 6 min,  0 users,  load average: 0.39, 1.27, 0.75
	Linux addons-660533 5.10.207 #1 SMP Tue Jan 14 08:15:54 UTC 2025 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2023.02.9"
	
	
	==> kube-apiserver [39cf766345f56567cf4dbeec431a600fed773a53c54f951dc4ed85849594b419] <==
	I0407 12:15:17.974915       1 handler.go:286] Adding GroupVersion metrics.k8s.io v1beta1 to ResourceManager
	E0407 12:17:16.048925       1 conn.go:339] Error on socket receive: read tcp 192.168.39.112:8443->192.168.39.1:33626: use of closed network connection
	E0407 12:17:16.289942       1 conn.go:339] Error on socket receive: read tcp 192.168.39.112:8443->192.168.39.1:33634: use of closed network connection
	I0407 12:17:25.931754       1 alloc.go:330] "allocated clusterIPs" service="headlamp/headlamp" clusterIPs={"IPv4":"10.96.118.76"}
	I0407 12:17:52.580888       1 controller.go:615] quota admission added evaluator for: ingresses.networking.k8s.io
	I0407 12:17:52.809613       1 alloc.go:330] "allocated clusterIPs" service="default/nginx" clusterIPs={"IPv4":"10.106.4.232"}
	I0407 12:17:55.074744       1 handler.go:286] Adding GroupVersion gadget.kinvolk.io v1alpha1 to ResourceManager
	I0407 12:17:55.455916       1 controller.go:615] quota admission added evaluator for: volumesnapshots.snapshot.storage.k8s.io
	W0407 12:17:56.211919       1 cacher.go:171] Terminating all watchers from cacher traces.gadget.kinvolk.io
	E0407 12:18:18.848597       1 authentication.go:74] "Unable to authenticate the request" err="[invalid bearer token, serviceaccounts \"local-path-provisioner-service-account\" not found]"
	I0407 12:18:18.976243       1 controller.go:129] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Nothing (removed from the queue).
	I0407 12:18:27.429939       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I0407 12:18:27.430008       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	I0407 12:18:27.521109       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I0407 12:18:27.521175       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	I0407 12:18:27.560095       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I0407 12:18:27.560169       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	I0407 12:18:27.581117       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I0407 12:18:27.582750       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	I0407 12:18:27.610858       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I0407 12:18:27.610934       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	W0407 12:18:28.581419       1 cacher.go:171] Terminating all watchers from cacher volumesnapshotcontents.snapshot.storage.k8s.io
	W0407 12:18:28.611461       1 cacher.go:171] Terminating all watchers from cacher volumesnapshots.snapshot.storage.k8s.io
	W0407 12:18:28.619430       1 cacher.go:171] Terminating all watchers from cacher volumesnapshotclasses.snapshot.storage.k8s.io
	I0407 12:20:13.807943       1 alloc.go:330] "allocated clusterIPs" service="default/hello-world-app" clusterIPs={"IPv4":"10.101.28.234"}
	
	
	==> kube-controller-manager [7dc48e7e75c9e801e97c86226392b2d7bc20f04dd75ce60a9838365a3bcd2563] <==
	E0407 12:19:37.887494       1 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	W0407 12:19:39.975505       1 reflector.go:362] The watchlist request ended with an error, falling back to the standard LIST/WATCH semantics because making progress is better than deadlocking, err = the server could not find the requested resource
	E0407 12:19:39.976555       1 metadata.go:231] "The watchlist request ended with an error, falling back to the standard LIST semantics" err="the server could not find the requested resource" resource="snapshot.storage.k8s.io/v1, Resource=volumesnapshotcontents"
	W0407 12:19:39.977412       1 reflector.go:569] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0407 12:19:39.977441       1 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	W0407 12:19:56.476602       1 reflector.go:362] The watchlist request ended with an error, falling back to the standard LIST/WATCH semantics because making progress is better than deadlocking, err = the server could not find the requested resource
	E0407 12:19:56.477821       1 metadata.go:231] "The watchlist request ended with an error, falling back to the standard LIST semantics" err="the server could not find the requested resource" resource="snapshot.storage.k8s.io/v1, Resource=volumesnapshotclasses"
	W0407 12:19:56.479198       1 reflector.go:569] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0407 12:19:56.479293       1 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	W0407 12:20:10.808637       1 reflector.go:362] The watchlist request ended with an error, falling back to the standard LIST/WATCH semantics because making progress is better than deadlocking, err = the server could not find the requested resource
	E0407 12:20:10.810018       1 metadata.go:231] "The watchlist request ended with an error, falling back to the standard LIST semantics" err="the server could not find the requested resource" resource="snapshot.storage.k8s.io/v1, Resource=volumesnapshots"
	W0407 12:20:10.811743       1 reflector.go:569] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0407 12:20:10.811823       1 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	W0407 12:20:12.990201       1 reflector.go:362] The watchlist request ended with an error, falling back to the standard LIST/WATCH semantics because making progress is better than deadlocking, err = the server could not find the requested resource
	E0407 12:20:12.991671       1 metadata.go:231] "The watchlist request ended with an error, falling back to the standard LIST semantics" err="the server could not find the requested resource" resource="gadget.kinvolk.io/v1alpha1, Resource=traces"
	W0407 12:20:12.992551       1 reflector.go:569] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0407 12:20:12.992644       1 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	I0407 12:20:13.640851       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/hello-world-app-7d9564db4" duration="55.67481ms"
	I0407 12:20:13.662143       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/hello-world-app-7d9564db4" duration="21.222742ms"
	I0407 12:20:13.662255       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/hello-world-app-7d9564db4" duration="63.698µs"
	I0407 12:20:13.670523       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/hello-world-app-7d9564db4" duration="44.249µs"
	W0407 12:20:14.004697       1 reflector.go:362] The watchlist request ended with an error, falling back to the standard LIST/WATCH semantics because making progress is better than deadlocking, err = the server could not find the requested resource
	E0407 12:20:14.006840       1 metadata.go:231] "The watchlist request ended with an error, falling back to the standard LIST semantics" err="the server could not find the requested resource" resource="snapshot.storage.k8s.io/v1, Resource=volumesnapshotcontents"
	W0407 12:20:14.008349       1 reflector.go:569] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0407 12:20:14.008721       1 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	
	
	==> kube-proxy [36009d84dfaa2e7bfce45b75cffbd14623fa35a7f503842c2b8e1e61e0ba7bb5] <==
		add table ip kube-proxy
		^^^^^^^^^^^^^^^^^^^^^^^^
	 >
	E0407 12:14:33.748064       1 proxier.go:733] "Error cleaning up nftables rules" err=<
		could not run nftables command: /dev/stdin:1:1-25: Error: Could not process rule: Operation not supported
		add table ip6 kube-proxy
		^^^^^^^^^^^^^^^^^^^^^^^^^
	 >
	I0407 12:14:33.768045       1 server.go:698] "Successfully retrieved node IP(s)" IPs=["192.168.39.112"]
	E0407 12:14:33.768131       1 server.go:234] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I0407 12:14:33.908422       1 server_linux.go:147] "No iptables support for family" ipFamily="IPv6"
	I0407 12:14:33.908457       1 server.go:245] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I0407 12:14:33.908480       1 server_linux.go:170] "Using iptables Proxier"
	I0407 12:14:33.913780       1 proxier.go:255] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I0407 12:14:33.914100       1 server.go:497] "Version info" version="v1.32.2"
	I0407 12:14:33.914116       1 server.go:499] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0407 12:14:33.916852       1 config.go:199] "Starting service config controller"
	I0407 12:14:33.916880       1 shared_informer.go:313] Waiting for caches to sync for service config
	I0407 12:14:33.916903       1 config.go:105] "Starting endpoint slice config controller"
	I0407 12:14:33.916907       1 shared_informer.go:313] Waiting for caches to sync for endpoint slice config
	I0407 12:14:33.917241       1 config.go:329] "Starting node config controller"
	I0407 12:14:33.917247       1 shared_informer.go:313] Waiting for caches to sync for node config
	I0407 12:14:34.017079       1 shared_informer.go:320] Caches are synced for endpoint slice config
	I0407 12:14:34.017142       1 shared_informer.go:320] Caches are synced for service config
	I0407 12:14:34.017425       1 shared_informer.go:320] Caches are synced for node config
	
	
	==> kube-scheduler [4088afaa05116fc506e5bcf70a9e01c9215d43501104cb7db11b617496b976d3] <==
	W0407 12:14:23.940999       1 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	E0407 12:14:23.941057       1 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumes\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0407 12:14:23.988829       1 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	E0407 12:14:23.988890       1 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PodDisruptionBudget: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User \"system:kube-scheduler\" cannot list resource \"poddisruptionbudgets\" in API group \"policy\" at the cluster scope" logger="UnhandledError"
	W0407 12:14:24.057621       1 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	E0407 12:14:24.057705       1 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User \"system:kube-scheduler\" cannot list resource \"services\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0407 12:14:24.087659       1 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	E0407 12:14:24.087715       1 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumeclaims\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0407 12:14:24.143807       1 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	E0407 12:14:24.143867       1 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSINode: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csinodes\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	W0407 12:14:24.209699       1 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	E0407 12:14:24.209886       1 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"storageclasses\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	W0407 12:14:24.399490       1 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	E0407 12:14:24.399559       1 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"statefulsets\" in API group \"apps\" at the cluster scope" logger="UnhandledError"
	W0407 12:14:24.410690       1 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	E0407 12:14:24.410836       1 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Namespace: failed to list *v1.Namespace: namespaces is forbidden: User \"system:kube-scheduler\" cannot list resource \"namespaces\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0407 12:14:24.425633       1 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	E0407 12:14:24.425685       1 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicationcontrollers\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0407 12:14:24.533757       1 reflector.go:569] runtime/asm_amd64.s:1700: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	E0407 12:14:24.533809       1 reflector.go:166] "Unhandled Error" err="runtime/asm_amd64.s:1700: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"extension-apiserver-authentication\" is forbidden: User \"system:kube-scheduler\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\"" logger="UnhandledError"
	W0407 12:14:24.554529       1 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	E0407 12:14:24.554663       1 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicasets\" in API group \"apps\" at the cluster scope" logger="UnhandledError"
	W0407 12:14:24.579956       1 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	E0407 12:14:24.580008       1 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Pod: failed to list *v1.Pod: pods is forbidden: User \"system:kube-scheduler\" cannot list resource \"pods\" in API group \"\" at the cluster scope" logger="UnhandledError"
	I0407 12:14:27.713815       1 shared_informer.go:320] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	
	==> kubelet <==
	Apr 07 12:19:26 addons-660533 kubelet[1224]: E0407 12:19:26.231680    1224 eviction_manager.go:259] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1744028366231216130,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:595808,},InodesUsed:&UInt64Value{Value:201,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Apr 07 12:19:26 addons-660533 kubelet[1224]: E0407 12:19:26.231726    1224 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1744028366231216130,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:595808,},InodesUsed:&UInt64Value{Value:201,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Apr 07 12:19:34 addons-660533 kubelet[1224]: I0407 12:19:34.954720    1224 kubelet_pods.go:1021] "Unable to retrieve pull secret, the image pull may not succeed." pod="kube-system/amd-gpu-device-plugin-vhjpx" secret="" err="secret \"gcp-auth\" not found"
	Apr 07 12:19:36 addons-660533 kubelet[1224]: E0407 12:19:36.235000    1224 eviction_manager.go:259] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1744028376234661558,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:595808,},InodesUsed:&UInt64Value{Value:201,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Apr 07 12:19:36 addons-660533 kubelet[1224]: E0407 12:19:36.235043    1224 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1744028376234661558,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:595808,},InodesUsed:&UInt64Value{Value:201,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Apr 07 12:19:46 addons-660533 kubelet[1224]: E0407 12:19:46.237983    1224 eviction_manager.go:259] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1744028386237569018,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:595808,},InodesUsed:&UInt64Value{Value:201,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Apr 07 12:19:46 addons-660533 kubelet[1224]: E0407 12:19:46.238488    1224 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1744028386237569018,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:595808,},InodesUsed:&UInt64Value{Value:201,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Apr 07 12:19:56 addons-660533 kubelet[1224]: E0407 12:19:56.241513    1224 eviction_manager.go:259] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1744028396240875560,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:595808,},InodesUsed:&UInt64Value{Value:201,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Apr 07 12:19:56 addons-660533 kubelet[1224]: E0407 12:19:56.241903    1224 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1744028396240875560,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:595808,},InodesUsed:&UInt64Value{Value:201,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Apr 07 12:19:56 addons-660533 kubelet[1224]: I0407 12:19:56.954104    1224 kubelet_pods.go:1021] "Unable to retrieve pull secret, the image pull may not succeed." pod="default/busybox" secret="" err="secret \"gcp-auth\" not found"
	Apr 07 12:20:06 addons-660533 kubelet[1224]: E0407 12:20:06.245089    1224 eviction_manager.go:259] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1744028406244636405,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:595808,},InodesUsed:&UInt64Value{Value:201,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Apr 07 12:20:06 addons-660533 kubelet[1224]: E0407 12:20:06.245706    1224 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1744028406244636405,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:595808,},InodesUsed:&UInt64Value{Value:201,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Apr 07 12:20:13 addons-660533 kubelet[1224]: I0407 12:20:13.631090    1224 memory_manager.go:355] "RemoveStaleState removing state" podUID="325cb036-4c02-4938-a0f0-36d27f633ff8" containerName="liveness-probe"
	Apr 07 12:20:13 addons-660533 kubelet[1224]: I0407 12:20:13.631729    1224 memory_manager.go:355] "RemoveStaleState removing state" podUID="325cb036-4c02-4938-a0f0-36d27f633ff8" containerName="hostpath"
	Apr 07 12:20:13 addons-660533 kubelet[1224]: I0407 12:20:13.631885    1224 memory_manager.go:355] "RemoveStaleState removing state" podUID="a116b593-276f-42e4-87c0-feb92ec4e669" containerName="csi-attacher"
	Apr 07 12:20:13 addons-660533 kubelet[1224]: I0407 12:20:13.631968    1224 memory_manager.go:355] "RemoveStaleState removing state" podUID="0cae20f8-5a1f-4689-9981-0459d2124509" containerName="csi-resizer"
	Apr 07 12:20:13 addons-660533 kubelet[1224]: I0407 12:20:13.632037    1224 memory_manager.go:355] "RemoveStaleState removing state" podUID="325cb036-4c02-4938-a0f0-36d27f633ff8" containerName="csi-snapshotter"
	Apr 07 12:20:13 addons-660533 kubelet[1224]: I0407 12:20:13.632069    1224 memory_manager.go:355] "RemoveStaleState removing state" podUID="b9c50f5a-77e6-4e0a-bf6b-274f57699e6f" containerName="volume-snapshot-controller"
	Apr 07 12:20:13 addons-660533 kubelet[1224]: I0407 12:20:13.632137    1224 memory_manager.go:355] "RemoveStaleState removing state" podUID="325cb036-4c02-4938-a0f0-36d27f633ff8" containerName="node-driver-registrar"
	Apr 07 12:20:13 addons-660533 kubelet[1224]: I0407 12:20:13.632179    1224 memory_manager.go:355] "RemoveStaleState removing state" podUID="6ce69134-1e55-4b86-8f23-2b447d7ebd75" containerName="task-pv-container"
	Apr 07 12:20:13 addons-660533 kubelet[1224]: I0407 12:20:13.632244    1224 memory_manager.go:355] "RemoveStaleState removing state" podUID="eb1982a3-5e88-430a-b0c1-97a97d352dfb" containerName="local-path-provisioner"
	Apr 07 12:20:13 addons-660533 kubelet[1224]: I0407 12:20:13.632277    1224 memory_manager.go:355] "RemoveStaleState removing state" podUID="f8ea4adb-6b48-49a3-9b0a-30d3dbc78173" containerName="volume-snapshot-controller"
	Apr 07 12:20:13 addons-660533 kubelet[1224]: I0407 12:20:13.632362    1224 memory_manager.go:355] "RemoveStaleState removing state" podUID="325cb036-4c02-4938-a0f0-36d27f633ff8" containerName="csi-provisioner"
	Apr 07 12:20:13 addons-660533 kubelet[1224]: I0407 12:20:13.632599    1224 memory_manager.go:355] "RemoveStaleState removing state" podUID="325cb036-4c02-4938-a0f0-36d27f633ff8" containerName="csi-external-health-monitor-controller"
	Apr 07 12:20:13 addons-660533 kubelet[1224]: I0407 12:20:13.774595    1224 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-cjkw5\" (UniqueName: \"kubernetes.io/projected/ee2753d1-8149-4065-9c15-a3dd1d83228b-kube-api-access-cjkw5\") pod \"hello-world-app-7d9564db4-9cvqm\" (UID: \"ee2753d1-8149-4065-9c15-a3dd1d83228b\") " pod="default/hello-world-app-7d9564db4-9cvqm"
	
	
	==> storage-provisioner [3a51d1d5086ed0e6633c86ce82155ae85195517f7200964c19422b9744db9ff9] <==
	I0407 12:14:40.073187       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I0407 12:14:40.101684       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I0407 12:14:40.101752       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	I0407 12:14:40.121867       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I0407 12:14:40.122060       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_addons-660533_c0cb4518-9785-40ba-a656-166e291c0960!
	I0407 12:14:40.130126       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"8a39eadc-70ac-45ea-9196-b540447a58d3", APIVersion:"v1", ResourceVersion:"650", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' addons-660533_c0cb4518-9785-40ba-a656-166e291c0960 became leader
	I0407 12:14:40.223471       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_addons-660533_c0cb4518-9785-40ba-a656-166e291c0960!
	

                                                
                                                
-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p addons-660533 -n addons-660533
helpers_test.go:261: (dbg) Run:  kubectl --context addons-660533 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:272: non-running pods: hello-world-app-7d9564db4-9cvqm ingress-nginx-admission-create-qmb8v ingress-nginx-admission-patch-788g7
helpers_test.go:274: ======> post-mortem[TestAddons/parallel/Ingress]: describe non-running pods <======
helpers_test.go:277: (dbg) Run:  kubectl --context addons-660533 describe pod hello-world-app-7d9564db4-9cvqm ingress-nginx-admission-create-qmb8v ingress-nginx-admission-patch-788g7
helpers_test.go:277: (dbg) Non-zero exit: kubectl --context addons-660533 describe pod hello-world-app-7d9564db4-9cvqm ingress-nginx-admission-create-qmb8v ingress-nginx-admission-patch-788g7: exit status 1 (98.943231ms)

                                                
                                                
-- stdout --
	Name:             hello-world-app-7d9564db4-9cvqm
	Namespace:        default
	Priority:         0
	Service Account:  default
	Node:             addons-660533/192.168.39.112
	Start Time:       Mon, 07 Apr 2025 12:20:13 +0000
	Labels:           app=hello-world-app
	                  pod-template-hash=7d9564db4
	Annotations:      <none>
	Status:           Pending
	IP:               
	IPs:              <none>
	Controlled By:    ReplicaSet/hello-world-app-7d9564db4
	Containers:
	  hello-world-app:
	    Container ID:   
	    Image:          docker.io/kicbase/echo-server:1.0
	    Image ID:       
	    Port:           8080/TCP
	    Host Port:      0/TCP
	    State:          Waiting
	      Reason:       ContainerCreating
	    Ready:          False
	    Restart Count:  0
	    Environment:    <none>
	    Mounts:
	      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-cjkw5 (ro)
	Conditions:
	  Type                        Status
	  PodReadyToStartContainers   False 
	  Initialized                 True 
	  Ready                       False 
	  ContainersReady             False 
	  PodScheduled                True 
	Volumes:
	  kube-api-access-cjkw5:
	    Type:                    Projected (a volume that contains injected data from multiple sources)
	    TokenExpirationSeconds:  3607
	    ConfigMapName:           kube-root-ca.crt
	    ConfigMapOptional:       <nil>
	    DownwardAPI:             true
	QoS Class:                   BestEffort
	Node-Selectors:              <none>
	Tolerations:                 node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
	                             node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
	Events:
	  Type    Reason     Age   From               Message
	  ----    ------     ----  ----               -------
	  Normal  Scheduled  3s    default-scheduler  Successfully assigned default/hello-world-app-7d9564db4-9cvqm to addons-660533
	  Normal  Pulling    2s    kubelet            Pulling image "docker.io/kicbase/echo-server:1.0"
	  Normal  Pulled     0s    kubelet            Successfully pulled image "docker.io/kicbase/echo-server:1.0" in 2.212s (2.212s including waiting). Image size: 4944818 bytes.

                                                
                                                
-- /stdout --
** stderr ** 
	Error from server (NotFound): pods "ingress-nginx-admission-create-qmb8v" not found
	Error from server (NotFound): pods "ingress-nginx-admission-patch-788g7" not found

                                                
                                                
** /stderr **
helpers_test.go:279: kubectl --context addons-660533 describe pod hello-world-app-7d9564db4-9cvqm ingress-nginx-admission-create-qmb8v ingress-nginx-admission-patch-788g7: exit status 1
addons_test.go:992: (dbg) Run:  out/minikube-linux-amd64 -p addons-660533 addons disable ingress-dns --alsologtostderr -v=1
addons_test.go:992: (dbg) Done: out/minikube-linux-amd64 -p addons-660533 addons disable ingress-dns --alsologtostderr -v=1: (1.808217536s)
addons_test.go:992: (dbg) Run:  out/minikube-linux-amd64 -p addons-660533 addons disable ingress --alsologtostderr -v=1
addons_test.go:992: (dbg) Done: out/minikube-linux-amd64 -p addons-660533 addons disable ingress --alsologtostderr -v=1: (7.92956509s)
--- FAIL: TestAddons/parallel/Ingress (153.98s)
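For reference, the step that failed above is the in-VM check `curl -s http://127.0.0.1/ -H 'Host: nginx.example.com'` run over `minikube ssh`, which exited with ssh status 28 (a timeout). Below is a minimal standalone Go sketch of an equivalent check made from the host instead; it is not part of the test suite, and it assumes the ingress controller is reachable on the node IP 192.168.39.112 reported by `minikube ip` above.

// ingresscheck.go - hedged standalone sketch, not part of the minikube tests.
// Sends the same request the test makes, but from the host against the node IP.
package main

import (
	"fmt"
	"io"
	"net/http"
	"time"
)

func main() {
	client := &http.Client{Timeout: 10 * time.Second}

	// Node IP taken from the `minikube ip` output earlier in this report.
	req, err := http.NewRequest("GET", "http://192.168.39.112/", nil)
	if err != nil {
		panic(err)
	}
	// The ingress rule from testdata/nginx-ingress-v1.yaml routes on this host name.
	req.Host = "nginx.example.com"

	resp, err := client.Do(req)
	if err != nil {
		fmt.Println("request failed:", err) // the test saw a timeout at this step
		return
	}
	defer resp.Body.Close()

	body, _ := io.ReadAll(resp.Body)
	fmt.Println(resp.Status)
	fmt.Println(string(body))
}

A 200 response with the nginx welcome page from this sketch would suggest the ingress route itself is healthy and the failure is specific to the in-VM curl path.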

                                                
                                    
x
+
TestFunctional/parallel/PersistentVolumeClaim (208.49s)

                                                
                                                
=== RUN   TestFunctional/parallel/PersistentVolumeClaim
=== PAUSE TestFunctional/parallel/PersistentVolumeClaim

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/PersistentVolumeClaim
functional_test_pvc_test.go:44: (dbg) TestFunctional/parallel/PersistentVolumeClaim: waiting 4m0s for pods matching "integration-test=storage-provisioner" in namespace "kube-system" ...
helpers_test.go:344: "storage-provisioner" [16ae952f-9e64-4b01-ad75-b556b112cb03] Running
functional_test_pvc_test.go:44: (dbg) TestFunctional/parallel/PersistentVolumeClaim: integration-test=storage-provisioner healthy within 6.003844745s
functional_test_pvc_test.go:49: (dbg) Run:  kubectl --context functional-728898 get storageclass -o=json
functional_test_pvc_test.go:69: (dbg) Run:  kubectl --context functional-728898 apply -f testdata/storage-provisioner/pvc.yaml
functional_test_pvc_test.go:76: (dbg) Run:  kubectl --context functional-728898 get pvc myclaim -o=json
functional_test_pvc_test.go:125: (dbg) Run:  kubectl --context functional-728898 apply -f testdata/storage-provisioner/pod.yaml
functional_test_pvc_test.go:130: (dbg) TestFunctional/parallel/PersistentVolumeClaim: waiting 3m0s for pods matching "test=storage-provisioner" in namespace "default" ...
helpers_test.go:344: "sp-pod" [f09248cf-5ce0-48a2-ba13-4505705dccf4] Pending
helpers_test.go:344: "sp-pod" [f09248cf-5ce0-48a2-ba13-4505705dccf4] Pending / Ready:ContainersNotReady (containers with unready status: [myfrontend]) / ContainersReady:ContainersNotReady (containers with unready status: [myfrontend])
helpers_test.go:344: "sp-pod" [f09248cf-5ce0-48a2-ba13-4505705dccf4] Running
functional_test_pvc_test.go:130: (dbg) TestFunctional/parallel/PersistentVolumeClaim: test=storage-provisioner healthy within 18.005316148s
functional_test_pvc_test.go:100: (dbg) Run:  kubectl --context functional-728898 exec sp-pod -- touch /tmp/mount/foo
functional_test_pvc_test.go:106: (dbg) Run:  kubectl --context functional-728898 delete -f testdata/storage-provisioner/pod.yaml
functional_test_pvc_test.go:106: (dbg) Done: kubectl --context functional-728898 delete -f testdata/storage-provisioner/pod.yaml: (1.112708467s)
functional_test_pvc_test.go:125: (dbg) Run:  kubectl --context functional-728898 apply -f testdata/storage-provisioner/pod.yaml
functional_test_pvc_test.go:130: (dbg) TestFunctional/parallel/PersistentVolumeClaim: waiting 3m0s for pods matching "test=storage-provisioner" in namespace "default" ...
helpers_test.go:344: "sp-pod" [59ccb8e9-51d3-4535-9029-fdfba6756cdc] Pending / Ready:ContainersNotReady (containers with unready status: [myfrontend]) / ContainersReady:ContainersNotReady (containers with unready status: [myfrontend])
E0407 12:32:04.915416 1169716 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20602-1162386/.minikube/profiles/addons-660533/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:329: TestFunctional/parallel/PersistentVolumeClaim: WARNING: pod list for "default" "test=storage-provisioner" returned: client rate limiter Wait returned an error: context deadline exceeded
functional_test_pvc_test.go:130: ***** TestFunctional/parallel/PersistentVolumeClaim: pod "test=storage-provisioner" failed to start within 3m0s: context deadline exceeded ****
functional_test_pvc_test.go:130: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p functional-728898 -n functional-728898
functional_test_pvc_test.go:130: TestFunctional/parallel/PersistentVolumeClaim: showing logs for failed pods as of 2025-04-07 12:34:35.621169203 +0000 UTC m=+1271.386080839
functional_test_pvc_test.go:130: (dbg) Run:  kubectl --context functional-728898 describe po sp-pod -n default
functional_test_pvc_test.go:130: (dbg) kubectl --context functional-728898 describe po sp-pod -n default:
Name:             sp-pod
Namespace:        default
Priority:         0
Service Account:  default
Node:             functional-728898/192.168.39.151
Start Time:       Mon, 07 Apr 2025 12:31:35 +0000
Labels:           test=storage-provisioner
Annotations:      <none>
Status:           Pending
IP:               10.244.0.15
IPs:
IP:  10.244.0.15
Containers:
myfrontend:
Container ID:   
Image:          docker.io/nginx
Image ID:       
Port:           <none>
Host Port:      <none>
State:          Waiting
Reason:       ImagePullBackOff
Ready:          False
Restart Count:  0
Environment:    <none>
Mounts:
/tmp/mount from mypd (rw)
/var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-6cmgj (ro)
Conditions:
Type                        Status
PodReadyToStartContainers   True 
Initialized                 True 
Ready                       False 
ContainersReady             False 
PodScheduled                True 
Volumes:
mypd:
Type:       PersistentVolumeClaim (a reference to a PersistentVolumeClaim in the same namespace)
ClaimName:  myclaim
ReadOnly:   false
kube-api-access-6cmgj:
Type:                    Projected (a volume that contains injected data from multiple sources)
TokenExpirationSeconds:  3607
ConfigMapName:           kube-root-ca.crt
ConfigMapOptional:       <nil>
DownwardAPI:             true
QoS Class:                   BestEffort
Node-Selectors:              <none>
Tolerations:                 node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
Events:
Type     Reason     Age                From               Message
----     ------     ----               ----               -------
Normal   Scheduled  3m                 default-scheduler  Successfully assigned default/sp-pod to functional-728898
Warning  Failed     104s               kubelet            Failed to pull image "docker.io/nginx": reading manifest latest in docker.io/library/nginx: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit
Normal   BackOff    104s               kubelet            Back-off pulling image "docker.io/nginx"
Warning  Failed     104s               kubelet            Error: ImagePullBackOff
Normal   Pulling    93s (x2 over 3m)   kubelet            Pulling image "docker.io/nginx"
Warning  Failed     2s (x2 over 104s)  kubelet            Error: ErrImagePull
Warning  Failed     2s                 kubelet            Failed to pull image "docker.io/nginx": fetching target platform image selected from image index: reading manifest sha256:54809b2f36d0ff38e8e5362b0239779e4b75c2f19ad70ef047ed050f01506bb4 in docker.io/library/nginx: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit
functional_test_pvc_test.go:130: (dbg) Run:  kubectl --context functional-728898 logs sp-pod -n default
functional_test_pvc_test.go:130: (dbg) Non-zero exit: kubectl --context functional-728898 logs sp-pod -n default: exit status 1 (78.266437ms)

                                                
                                                
** stderr ** 
	Error from server (BadRequest): container "myfrontend" in pod "sp-pod" is waiting to start: trying and failing to pull image

                                                
                                                
** /stderr **
functional_test_pvc_test.go:130: kubectl --context functional-728898 logs sp-pod -n default: exit status 1
functional_test_pvc_test.go:131: failed waiting for pod: test=storage-provisioner within 3m0s: context deadline exceeded
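The describe output above points at the root cause: the kubelet could not pull docker.io/nginx because of Docker Hub's unauthenticated pull rate limit (toomanyrequests). As a hedged workaround sketch only (not something this test does), the image could be pre-loaded into the profile so the kubelet never has to pull from Docker Hub; the snippet below shells out to the same minikube binary used in this run and assumes the host itself can still obtain the image.

// preload_nginx.go - hedged workaround sketch, not part of the repo.
// Loads docker.io/nginx into the functional-728898 profile's container storage.
package main

import (
	"log"
	"os/exec"
)

func main() {
	// Binary path and profile name are taken from this report.
	cmd := exec.Command("out/minikube-linux-amd64",
		"-p", "functional-728898",
		"image", "load", "docker.io/nginx:latest")
	out, err := cmd.CombinedOutput()
	log.Printf("%s", out)
	if err != nil {
		log.Fatalf("image load failed: %v", err)
	}
}

With the image already present on the node, the sp-pod above could start without contacting Docker Hub at all.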
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p functional-728898 -n functional-728898
helpers_test.go:244: <<< TestFunctional/parallel/PersistentVolumeClaim FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestFunctional/parallel/PersistentVolumeClaim]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p functional-728898 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-amd64 -p functional-728898 logs -n 25: (1.30599699s)
helpers_test.go:252: TestFunctional/parallel/PersistentVolumeClaim logs: 
-- stdout --
	
	==> Audit <==
	|----------------|--------------------------------------------------------------------------|-------------------|---------|---------|---------------------|---------------------|
	|    Command     |                                   Args                                   |      Profile      |  User   | Version |     Start Time      |      End Time       |
	|----------------|--------------------------------------------------------------------------|-------------------|---------|---------|---------------------|---------------------|
	| ssh            | functional-728898 ssh stat                                               | functional-728898 | jenkins | v1.35.0 | 07 Apr 25 12:32 UTC | 07 Apr 25 12:32 UTC |
	|                | /mount-9p/created-by-pod                                                 |                   |         |         |                     |                     |
	| ssh            | functional-728898 ssh sudo                                               | functional-728898 | jenkins | v1.35.0 | 07 Apr 25 12:32 UTC | 07 Apr 25 12:32 UTC |
	|                | umount -f /mount-9p                                                      |                   |         |         |                     |                     |
	| mount          | -p functional-728898                                                     | functional-728898 | jenkins | v1.35.0 | 07 Apr 25 12:32 UTC |                     |
	|                | /tmp/TestFunctionalparallelMountCmdspecific-port2444163506/001:/mount-9p |                   |         |         |                     |                     |
	|                | --alsologtostderr -v=1 --port 46464                                      |                   |         |         |                     |                     |
	| ssh            | functional-728898 ssh findmnt                                            | functional-728898 | jenkins | v1.35.0 | 07 Apr 25 12:32 UTC |                     |
	|                | -T /mount-9p | grep 9p                                                   |                   |         |         |                     |                     |
	| ssh            | functional-728898 ssh findmnt                                            | functional-728898 | jenkins | v1.35.0 | 07 Apr 25 12:32 UTC | 07 Apr 25 12:32 UTC |
	|                | -T /mount-9p | grep 9p                                                   |                   |         |         |                     |                     |
	| ssh            | functional-728898 ssh -- ls                                              | functional-728898 | jenkins | v1.35.0 | 07 Apr 25 12:32 UTC | 07 Apr 25 12:32 UTC |
	|                | -la /mount-9p                                                            |                   |         |         |                     |                     |
	| ssh            | functional-728898 ssh sudo                                               | functional-728898 | jenkins | v1.35.0 | 07 Apr 25 12:32 UTC |                     |
	|                | umount -f /mount-9p                                                      |                   |         |         |                     |                     |
	| ssh            | functional-728898 ssh findmnt                                            | functional-728898 | jenkins | v1.35.0 | 07 Apr 25 12:32 UTC |                     |
	|                | -T /mount1                                                               |                   |         |         |                     |                     |
	| mount          | -p functional-728898                                                     | functional-728898 | jenkins | v1.35.0 | 07 Apr 25 12:32 UTC |                     |
	|                | /tmp/TestFunctionalparallelMountCmdVerifyCleanup2559023673/001:/mount2   |                   |         |         |                     |                     |
	|                | --alsologtostderr -v=1                                                   |                   |         |         |                     |                     |
	| mount          | -p functional-728898                                                     | functional-728898 | jenkins | v1.35.0 | 07 Apr 25 12:32 UTC |                     |
	|                | /tmp/TestFunctionalparallelMountCmdVerifyCleanup2559023673/001:/mount1   |                   |         |         |                     |                     |
	|                | --alsologtostderr -v=1                                                   |                   |         |         |                     |                     |
	| mount          | -p functional-728898                                                     | functional-728898 | jenkins | v1.35.0 | 07 Apr 25 12:32 UTC |                     |
	|                | /tmp/TestFunctionalparallelMountCmdVerifyCleanup2559023673/001:/mount3   |                   |         |         |                     |                     |
	|                | --alsologtostderr -v=1                                                   |                   |         |         |                     |                     |
	| ssh            | functional-728898 ssh findmnt                                            | functional-728898 | jenkins | v1.35.0 | 07 Apr 25 12:32 UTC | 07 Apr 25 12:32 UTC |
	|                | -T /mount1                                                               |                   |         |         |                     |                     |
	| ssh            | functional-728898 ssh findmnt                                            | functional-728898 | jenkins | v1.35.0 | 07 Apr 25 12:32 UTC | 07 Apr 25 12:32 UTC |
	|                | -T /mount2                                                               |                   |         |         |                     |                     |
	| ssh            | functional-728898 ssh findmnt                                            | functional-728898 | jenkins | v1.35.0 | 07 Apr 25 12:32 UTC | 07 Apr 25 12:32 UTC |
	|                | -T /mount3                                                               |                   |         |         |                     |                     |
	| mount          | -p functional-728898                                                     | functional-728898 | jenkins | v1.35.0 | 07 Apr 25 12:32 UTC |                     |
	|                | --kill=true                                                              |                   |         |         |                     |                     |
	| image          | functional-728898                                                        | functional-728898 | jenkins | v1.35.0 | 07 Apr 25 12:32 UTC | 07 Apr 25 12:32 UTC |
	|                | image ls --format short                                                  |                   |         |         |                     |                     |
	|                | --alsologtostderr                                                        |                   |         |         |                     |                     |
	| image          | functional-728898                                                        | functional-728898 | jenkins | v1.35.0 | 07 Apr 25 12:32 UTC | 07 Apr 25 12:32 UTC |
	|                | image ls --format yaml                                                   |                   |         |         |                     |                     |
	|                | --alsologtostderr                                                        |                   |         |         |                     |                     |
	| ssh            | functional-728898 ssh pgrep                                              | functional-728898 | jenkins | v1.35.0 | 07 Apr 25 12:32 UTC |                     |
	|                | buildkitd                                                                |                   |         |         |                     |                     |
	| image          | functional-728898 image build -t                                         | functional-728898 | jenkins | v1.35.0 | 07 Apr 25 12:32 UTC | 07 Apr 25 12:32 UTC |
	|                | localhost/my-image:functional-728898                                     |                   |         |         |                     |                     |
	|                | testdata/build --alsologtostderr                                         |                   |         |         |                     |                     |
	| image          | functional-728898 image ls                                               | functional-728898 | jenkins | v1.35.0 | 07 Apr 25 12:32 UTC | 07 Apr 25 12:32 UTC |
	| image          | functional-728898                                                        | functional-728898 | jenkins | v1.35.0 | 07 Apr 25 12:32 UTC | 07 Apr 25 12:32 UTC |
	|                | image ls --format json                                                   |                   |         |         |                     |                     |
	|                | --alsologtostderr                                                        |                   |         |         |                     |                     |
	| image          | functional-728898                                                        | functional-728898 | jenkins | v1.35.0 | 07 Apr 25 12:32 UTC | 07 Apr 25 12:32 UTC |
	|                | image ls --format table                                                  |                   |         |         |                     |                     |
	|                | --alsologtostderr                                                        |                   |         |         |                     |                     |
	| update-context | functional-728898                                                        | functional-728898 | jenkins | v1.35.0 | 07 Apr 25 12:32 UTC | 07 Apr 25 12:32 UTC |
	|                | update-context                                                           |                   |         |         |                     |                     |
	|                | --alsologtostderr -v=2                                                   |                   |         |         |                     |                     |
	| update-context | functional-728898                                                        | functional-728898 | jenkins | v1.35.0 | 07 Apr 25 12:32 UTC | 07 Apr 25 12:32 UTC |
	|                | update-context                                                           |                   |         |         |                     |                     |
	|                | --alsologtostderr -v=2                                                   |                   |         |         |                     |                     |
	| update-context | functional-728898                                                        | functional-728898 | jenkins | v1.35.0 | 07 Apr 25 12:32 UTC | 07 Apr 25 12:32 UTC |
	|                | update-context                                                           |                   |         |         |                     |                     |
	|                | --alsologtostderr -v=2                                                   |                   |         |         |                     |                     |
	|----------------|--------------------------------------------------------------------------|-------------------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2025/04/07 12:31:27
	Running on machine: ubuntu-20-agent-2
	Binary: Built with gc go1.24.0 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0407 12:31:27.823336 1179726 out.go:345] Setting OutFile to fd 1 ...
	I0407 12:31:27.823511 1179726 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0407 12:31:27.823530 1179726 out.go:358] Setting ErrFile to fd 2...
	I0407 12:31:27.823537 1179726 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0407 12:31:27.823874 1179726 root.go:338] Updating PATH: /home/jenkins/minikube-integration/20602-1162386/.minikube/bin
	I0407 12:31:27.824512 1179726 out.go:352] Setting JSON to false
	I0407 12:31:27.825851 1179726 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-2","uptime":15232,"bootTime":1744013856,"procs":229,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1078-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0407 12:31:27.825986 1179726 start.go:139] virtualization: kvm guest
	I0407 12:31:27.828807 1179726 out.go:177] * [functional-728898] minikube v1.35.0 on Ubuntu 20.04 (kvm/amd64)
	I0407 12:31:27.831661 1179726 notify.go:220] Checking for updates...
	I0407 12:31:27.831689 1179726 out.go:177]   - MINIKUBE_LOCATION=20602
	I0407 12:31:27.835776 1179726 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0407 12:31:27.839208 1179726 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/20602-1162386/kubeconfig
	I0407 12:31:27.841222 1179726 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/20602-1162386/.minikube
	I0407 12:31:27.843251 1179726 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0407 12:31:27.845401 1179726 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0407 12:31:27.847889 1179726 config.go:182] Loaded profile config "functional-728898": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.32.2
	I0407 12:31:27.848422 1179726 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0407 12:31:27.848495 1179726 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0407 12:31:27.872082 1179726 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43461
	I0407 12:31:27.872734 1179726 main.go:141] libmachine: () Calling .GetVersion
	I0407 12:31:27.873397 1179726 main.go:141] libmachine: Using API Version  1
	I0407 12:31:27.873425 1179726 main.go:141] libmachine: () Calling .SetConfigRaw
	I0407 12:31:27.873935 1179726 main.go:141] libmachine: () Calling .GetMachineName
	I0407 12:31:27.874322 1179726 main.go:141] libmachine: (functional-728898) Calling .DriverName
	I0407 12:31:27.874754 1179726 driver.go:394] Setting default libvirt URI to qemu:///system
	I0407 12:31:27.875323 1179726 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0407 12:31:27.875388 1179726 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0407 12:31:27.897668 1179726 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35575
	I0407 12:31:27.898307 1179726 main.go:141] libmachine: () Calling .GetVersion
	I0407 12:31:27.899292 1179726 main.go:141] libmachine: Using API Version  1
	I0407 12:31:27.899465 1179726 main.go:141] libmachine: () Calling .SetConfigRaw
	I0407 12:31:27.900064 1179726 main.go:141] libmachine: () Calling .GetMachineName
	I0407 12:31:27.900462 1179726 main.go:141] libmachine: (functional-728898) Calling .DriverName
	I0407 12:31:27.944079 1179726 out.go:177] * Using the kvm2 driver based on the existing profile
	I0407 12:31:27.946374 1179726 start.go:297] selected driver: kvm2
	I0407 12:31:27.946409 1179726 start.go:901] validating driver "kvm2" against &{Name:functional-728898 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.35.0-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.46-1743675393-20591@sha256:8de8167e280414c9d16e4c3da59bc85bc7c9cc24228af995863fc7bcabfcf727 Memory:4000 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.32.2 ClusterNa
me:functional-728898 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.151 Port:8441 KubernetesVersion:v1.32.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/je
nkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0407 12:31:27.946543 1179726 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0407 12:31:27.949744 1179726 out.go:201] 
	W0407 12:31:27.952054 1179726 out.go:270] X Exiting due to RSRC_INSUFFICIENT_REQ_MEMORY: the requested memory allocation of 250 MiB is below the usable minimum of 1800 MB
	I0407 12:31:27.953828 1179726 out.go:201] 
	
	
	==> CRI-O <==
	Apr 07 12:34:36 functional-728898 crio[4913]: time="2025-04-07 12:34:36.484143867Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1744029276484118588,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:294016,},InodesUsed:&UInt64Value{Value:127,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=019efe4c-da90-4845-a21e-97bc5627cc17 name=/runtime.v1.ImageService/ImageFsInfo
	Apr 07 12:34:36 functional-728898 crio[4913]: time="2025-04-07 12:34:36.484789195Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=f05fdb75-8258-4ef1-86ad-fa7aad17b02a name=/runtime.v1.RuntimeService/ListContainers
	Apr 07 12:34:36 functional-728898 crio[4913]: time="2025-04-07 12:34:36.484841429Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=f05fdb75-8258-4ef1-86ad-fa7aad17b02a name=/runtime.v1.RuntimeService/ListContainers
	Apr 07 12:34:36 functional-728898 crio[4913]: time="2025-04-07 12:34:36.485381402Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:61cfcaeb35143eb0de95458d3dc252bbfb821bc28f3272f149231a0ee0197135,PodSandboxId:8308802280b4a5804fc9617d78904a82c0e876eb9592537b7dcc7b749f9f7b3f,Metadata:&ContainerMetadata{Name:dashboard-metrics-scraper,Attempt:0,},Image:&ImageSpec{Image:docker.io/kubernetesui/metrics-scraper@sha256:43227e8286fd379ee0415a5e2156a9439c4056807e3caa38e1dd413b0644807a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:115053965e86b2df4d78af78d7951b8644839d20a03820c6df59a261103315f7,State:CONTAINER_RUNNING,CreatedAt:1744029179956536158,Labels:map[string]string{io.kubernetes.container.name: dashboard-metrics-scraper,io.kubernetes.pod.name: dashboard-metrics-scraper-5d59dccf9b-hchz9,io.kubernetes.pod.namespace: kubernetes-dashboard,io.kubernetes.pod.uid: 4a9d61d0-b68a-4d16-863d-ca4fe7e10fa6,},Annotations:map[string]string{io.kub
ernetes.container.hash: 925d0c44,io.kubernetes.container.ports: [{\"containerPort\":8000,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:79e6ae6b95171565d73c07c01490487ea7fee11ba11f7e6a30a204b86335a407,PodSandboxId:0433f6fe349c330301c1bcfbecdbccfac0a3f532893bcdadc5bfd9526ec100d2,Metadata:&ContainerMetadata{Name:kubernetes-dashboard,Attempt:0,},Image:&ImageSpec{Image:docker.io/kubernetesui/dashboard@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:07655ddf2eebe5d250f7a72c25f638b27126805d61779741b4e62e69ba080558,State:CONTAINER_RUNNING,CreatedAt:1744029177103675333,Labels:map[string]string{io.kubernetes.container.name: kubernetes-dashboard,io.kubernetes.pod.name: kubernetes-dashboard-7779f9b69b-wcf49,io.kubernete
s.pod.namespace: kubernetes-dashboard,io.kubernetes.pod.uid: d7735f03-dec7-4187-bb41-90c619473df0,},Annotations:map[string]string{io.kubernetes.container.hash: 823ca662,io.kubernetes.container.ports: [{\"containerPort\":9090,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3ac976c79c64830e17267759075a44ad7a7741506cb370dd4507b325c9883b76,PodSandboxId:f4c1f4934a89650182c3ee05d95be1c6928f58fa9bca64f02bc33930b7548f8c,Metadata:&ContainerMetadata{Name:mount-munger,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c,State:CONTAINER_EXITED,CreatedAt:1744029139668414225,Labels:map[string]string{io.
kubernetes.container.name: mount-munger,io.kubernetes.pod.name: busybox-mount,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: ce03bda2-e049-4c16-bc7f-d21929aa75ce,},Annotations:map[string]string{io.kubernetes.container.hash: dbb284d0,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:45c2dffaca28cb25ab301b0b159e9c767eb22331ef283bf92cdae10b61c8817c,PodSandboxId:7e2f3b699d4fb9be005486c9871944617cd10bc523ba3434f68dfa4230035c6c,Metadata:&ContainerMetadata{Name:echoserver,Attempt:0,},Image:&ImageSpec{Image:82e4c8a736a4fcf22b5ef9f6a4ff6207064c7187d7694bf97bd561605a538410,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:82e4c8a736a4fcf22b5ef9f6a4ff6207064c7187d7694bf97bd561605a538410,State:CONTAINER_RUNNING,CreatedAt:1744029086551862239,Labels:map[string]string{io.kubernetes.container.name: echoserver,
io.kubernetes.pod.name: hello-node-connect-58f9cf68d8-xlzk8,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 238652ef-7f48-42be-ab1b-58c79e679abd,},Annotations:map[string]string{io.kubernetes.container.hash: aa985672,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8ce4150e5ca64bb249585aa874ac2c0b3b5124c0074c0a1af7fde49ba07e1b1f,PodSandboxId:216bf89ff12fac735b99994741760aef9fc0b788d353079aae87be9724ce26b3,Metadata:&ContainerMetadata{Name:nginx,Attempt:0,},Image:&ImageSpec{Image:docker.io/library/nginx@sha256:4ff102c5d78d254a6f0da062b3cf39eaf07f01eec0927fd21e219d0af8bc0591,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1ff4bb4faebcfb1f7e01144fa9904a570ab9bab88694457855feb6c6bba3fa07,State:CONTAINER_RUNNING,CreatedAt:1744029078887662024,Labels:map[string]string{io.kubernetes.container.name: ngi
nx,io.kubernetes.pod.name: nginx-svc,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 7a5f880a-151c-4e01-a038-26cdad8a4086,},Annotations:map[string]string{io.kubernetes.container.hash: cdfbc70a,io.kubernetes.container.ports: [{\"containerPort\":80,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7452ca27af16f51d13a5bf8e5a81261c8c290c5116c5e3ffd0be0e33e6f32287,PodSandboxId:1e6bf2f675ca0671acae94bcb950f153d6d57fec44aa605a312bd2739e52e6ce,Metadata:&ContainerMetadata{Name:echoserver,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/echoserver@sha256:cb3386f863f6a4b05f33c191361723f9d5927ac287463b1bea633bf859475969,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:82e4c8a736a4fcf22b5ef9f6a4ff6207064c7187d7694bf97bd561605a538410,State:CONTAINER_RUNNING,CreatedAt:1744029074691418
251,Labels:map[string]string{io.kubernetes.container.name: echoserver,io.kubernetes.pod.name: hello-node-fcfd88b6f-hq2l4,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 9cd43b15-5de0-4b18-93e0-1a20605b3d3b,},Annotations:map[string]string{io.kubernetes.container.hash: aa985672,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d56da9bec71eb1199a87233f6976d8c3f87e73cdc0e336936406ffa0bc414016,PodSandboxId:250183d67c0abdfbbca1e7baf050e092bd026133834d085970941362166396da,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1744029049854504044,Labels:map[string]string{i
o.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-668d6bf9bc-hlndz,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a7057602-1190-4a5f-90ea-148e2d5babf4,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6c6013453428c3c541fcfd3498084ea17cff6776f774569e24a2a5d87d36ae99,PodSandboxId:f04c38eadb19133055219ae51e4e174500343871db2f4dfc1bb20ada640e2053,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string
{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1744029049401719544,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-668d6bf9bc-52vhr,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 752fe55a-003a-40fa-a1a6-dd762db82e0c,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9930e272f17b71e45f60674688fe7b134094010f20529b2bbc81518611620890,PodSandboxId:cde9f0abcc1c797e36a430a55d1e94b0ef37b148
9754e97a4bf96d406dc5d4a5,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1744029049217840356,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 16ae952f-9e64-4b01-ad75-b556b112cb03,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ad18f7874f75273858975f81e17ce0d514bc3c3c0663b789c0cf9b97d6c5db9a,PodSandboxId:054db3ce1b0abc5de544916f4dac644f5cff1656e8a24d6a9d2f
adad44a09d61,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:f1332858868e1c6a905123b21e2e322ab45a5b99a3532e68ff49a87c2266ebc5,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:f1332858868e1c6a905123b21e2e322ab45a5b99a3532e68ff49a87c2266ebc5,State:CONTAINER_RUNNING,CreatedAt:1744029047886556263,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-48r4x,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 04060509-a7f2-4be6-b807-c709c6c1e3eb,},Annotations:map[string]string{io.kubernetes.container.hash: b4fecc5,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1d400b079c78d32546aa75b8827e7628204648f23b4d6286f7300db0f84913f6,PodSandboxId:b034bf0e0dd3013ae0b2011c772deded3e4ce2a27790d0c4c3a6e6d1cbd44c99,Metadata:&ContainerMe
tadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:85b7a174738baecbc53029b7913cd430a2060e0cbdb5f56c7957d32ff7f241ef,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:85b7a174738baecbc53029b7913cd430a2060e0cbdb5f56c7957d32ff7f241ef,State:CONTAINER_RUNNING,CreatedAt:1744029036307272948,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-functional-728898,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5068eeb41b70734dc3c2019b8280bc0f,},Annotations:map[string]string{io.kubernetes.container.hash: 7745040f,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c3121037762e1f6146a3c087b19535c589d603013d08a88f808c08e4ce645b11,PodSandboxId:257cef0f19a3ed52979effb0faa63323b90ae964f588e4b321504fcfc77acb09,Metadata:&ContainerMetadata{Name:k
ube-scheduler,Attempt:3,},Image:&ImageSpec{Image:d8e673e7c9983f1f53569a9d2ba786c8abb42e3f744f77dc97a595f3caf9435d,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:d8e673e7c9983f1f53569a9d2ba786c8abb42e3f744f77dc97a595f3caf9435d,State:CONTAINER_RUNNING,CreatedAt:1744029036288817771,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-functional-728898,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 65277f868c895be86e767f8d62913710,},Annotations:map[string]string{io.kubernetes.container.hash: 4c5aaea3,io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a5dbf84a635343ac9efc499953db9a64f8df84e22403f3a89220b3c6e7de8128,PodSandboxId:ec3d7895c8a2a83d7dbe96b7dfaa1ae6a8f299a9e4094bec221d230cfd077793,Metadata:&ContainerMetadata{Name:etcd,Attempt:3
,},Image:&ImageSpec{Image:a9e7e6b294baf1695fccb862d956c5d3ad8510e1e4ca1535f35dc09f247abbfc,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a9e7e6b294baf1695fccb862d956c5d3ad8510e1e4ca1535f35dc09f247abbfc,State:CONTAINER_RUNNING,CreatedAt:1744029036252669290,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-functional-728898,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9baea30e9eb118c89ddd369d5e0d1bd6,},Annotations:map[string]string{io.kubernetes.container.hash: e68be80f,io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7486c26ef9200d679640eddc4b175bc9720bddd11b0a617684cb0f38e2f2290a,PodSandboxId:d836f50a336ae77a193527ba127a02d01931d05e46193846e9b9dfdc35d5f99e,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:3,},Image:&ImageSpec{Imag
e:b6a454c5a800d201daacead6ff195ec6049fe6dc086621b0670bca912efaf389,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:b6a454c5a800d201daacead6ff195ec6049fe6dc086621b0670bca912efaf389,State:CONTAINER_RUNNING,CreatedAt:1744029036246488459,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-functional-728898,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d592d768ff54d22d51886bcc5507a7a8,},Annotations:map[string]string{io.kubernetes.container.hash: 51692d3d,io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4448b2213cb7fb1f4a035fa143c4303e3f68cedd25140b0278531edf8fa55a1f,PodSandboxId:8c1c591e2b17c9eba2bf79646fd6152edc2905e781fd186bc7dc497ebe082208,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec
{Image:85b7a174738baecbc53029b7913cd430a2060e0cbdb5f56c7957d32ff7f241ef,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:85b7a174738baecbc53029b7913cd430a2060e0cbdb5f56c7957d32ff7f241ef,State:CONTAINER_EXITED,CreatedAt:1744028774875808632,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-functional-728898,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5068eeb41b70734dc3c2019b8280bc0f,},Annotations:map[string]string{io.kubernetes.container.hash: 7745040f,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=f05fdb75-8258-4ef1-86ad-fa7aad17b02a name=/runtime.v1.RuntimeService/ListContainers
	Apr 07 12:34:36 functional-728898 crio[4913]: time="2025-04-07 12:34:36.530564118Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=51795591-c875-4117-961e-a8b6282b0ef4 name=/runtime.v1.RuntimeService/Version
	Apr 07 12:34:36 functional-728898 crio[4913]: time="2025-04-07 12:34:36.530639284Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=51795591-c875-4117-961e-a8b6282b0ef4 name=/runtime.v1.RuntimeService/Version
	Apr 07 12:34:36 functional-728898 crio[4913]: time="2025-04-07 12:34:36.532005321Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=c8b0b030-e6fd-4c3f-b1d0-5e91ad073775 name=/runtime.v1.ImageService/ImageFsInfo
	Apr 07 12:34:36 functional-728898 crio[4913]: time="2025-04-07 12:34:36.533794944Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1744029276533765488,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:294016,},InodesUsed:&UInt64Value{Value:127,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=c8b0b030-e6fd-4c3f-b1d0-5e91ad073775 name=/runtime.v1.ImageService/ImageFsInfo
	Apr 07 12:34:36 functional-728898 crio[4913]: time="2025-04-07 12:34:36.534452448Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=31d7b3be-05c0-4307-8b6b-86afe3b1ba60 name=/runtime.v1.RuntimeService/ListContainers
	Apr 07 12:34:36 functional-728898 crio[4913]: time="2025-04-07 12:34:36.534531221Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=31d7b3be-05c0-4307-8b6b-86afe3b1ba60 name=/runtime.v1.RuntimeService/ListContainers
	Apr 07 12:34:36 functional-728898 crio[4913]: time="2025-04-07 12:34:36.534840144Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:61cfcaeb35143eb0de95458d3dc252bbfb821bc28f3272f149231a0ee0197135,PodSandboxId:8308802280b4a5804fc9617d78904a82c0e876eb9592537b7dcc7b749f9f7b3f,Metadata:&ContainerMetadata{Name:dashboard-metrics-scraper,Attempt:0,},Image:&ImageSpec{Image:docker.io/kubernetesui/metrics-scraper@sha256:43227e8286fd379ee0415a5e2156a9439c4056807e3caa38e1dd413b0644807a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:115053965e86b2df4d78af78d7951b8644839d20a03820c6df59a261103315f7,State:CONTAINER_RUNNING,CreatedAt:1744029179956536158,Labels:map[string]string{io.kubernetes.container.name: dashboard-metrics-scraper,io.kubernetes.pod.name: dashboard-metrics-scraper-5d59dccf9b-hchz9,io.kubernetes.pod.namespace: kubernetes-dashboard,io.kubernetes.pod.uid: 4a9d61d0-b68a-4d16-863d-ca4fe7e10fa6,},Annotations:map[string]string{io.kub
ernetes.container.hash: 925d0c44,io.kubernetes.container.ports: [{\"containerPort\":8000,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:79e6ae6b95171565d73c07c01490487ea7fee11ba11f7e6a30a204b86335a407,PodSandboxId:0433f6fe349c330301c1bcfbecdbccfac0a3f532893bcdadc5bfd9526ec100d2,Metadata:&ContainerMetadata{Name:kubernetes-dashboard,Attempt:0,},Image:&ImageSpec{Image:docker.io/kubernetesui/dashboard@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:07655ddf2eebe5d250f7a72c25f638b27126805d61779741b4e62e69ba080558,State:CONTAINER_RUNNING,CreatedAt:1744029177103675333,Labels:map[string]string{io.kubernetes.container.name: kubernetes-dashboard,io.kubernetes.pod.name: kubernetes-dashboard-7779f9b69b-wcf49,io.kubernete
s.pod.namespace: kubernetes-dashboard,io.kubernetes.pod.uid: d7735f03-dec7-4187-bb41-90c619473df0,},Annotations:map[string]string{io.kubernetes.container.hash: 823ca662,io.kubernetes.container.ports: [{\"containerPort\":9090,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3ac976c79c64830e17267759075a44ad7a7741506cb370dd4507b325c9883b76,PodSandboxId:f4c1f4934a89650182c3ee05d95be1c6928f58fa9bca64f02bc33930b7548f8c,Metadata:&ContainerMetadata{Name:mount-munger,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c,State:CONTAINER_EXITED,CreatedAt:1744029139668414225,Labels:map[string]string{io.
kubernetes.container.name: mount-munger,io.kubernetes.pod.name: busybox-mount,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: ce03bda2-e049-4c16-bc7f-d21929aa75ce,},Annotations:map[string]string{io.kubernetes.container.hash: dbb284d0,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:45c2dffaca28cb25ab301b0b159e9c767eb22331ef283bf92cdae10b61c8817c,PodSandboxId:7e2f3b699d4fb9be005486c9871944617cd10bc523ba3434f68dfa4230035c6c,Metadata:&ContainerMetadata{Name:echoserver,Attempt:0,},Image:&ImageSpec{Image:82e4c8a736a4fcf22b5ef9f6a4ff6207064c7187d7694bf97bd561605a538410,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:82e4c8a736a4fcf22b5ef9f6a4ff6207064c7187d7694bf97bd561605a538410,State:CONTAINER_RUNNING,CreatedAt:1744029086551862239,Labels:map[string]string{io.kubernetes.container.name: echoserver,
io.kubernetes.pod.name: hello-node-connect-58f9cf68d8-xlzk8,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 238652ef-7f48-42be-ab1b-58c79e679abd,},Annotations:map[string]string{io.kubernetes.container.hash: aa985672,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8ce4150e5ca64bb249585aa874ac2c0b3b5124c0074c0a1af7fde49ba07e1b1f,PodSandboxId:216bf89ff12fac735b99994741760aef9fc0b788d353079aae87be9724ce26b3,Metadata:&ContainerMetadata{Name:nginx,Attempt:0,},Image:&ImageSpec{Image:docker.io/library/nginx@sha256:4ff102c5d78d254a6f0da062b3cf39eaf07f01eec0927fd21e219d0af8bc0591,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1ff4bb4faebcfb1f7e01144fa9904a570ab9bab88694457855feb6c6bba3fa07,State:CONTAINER_RUNNING,CreatedAt:1744029078887662024,Labels:map[string]string{io.kubernetes.container.name: ngi
nx,io.kubernetes.pod.name: nginx-svc,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 7a5f880a-151c-4e01-a038-26cdad8a4086,},Annotations:map[string]string{io.kubernetes.container.hash: cdfbc70a,io.kubernetes.container.ports: [{\"containerPort\":80,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7452ca27af16f51d13a5bf8e5a81261c8c290c5116c5e3ffd0be0e33e6f32287,PodSandboxId:1e6bf2f675ca0671acae94bcb950f153d6d57fec44aa605a312bd2739e52e6ce,Metadata:&ContainerMetadata{Name:echoserver,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/echoserver@sha256:cb3386f863f6a4b05f33c191361723f9d5927ac287463b1bea633bf859475969,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:82e4c8a736a4fcf22b5ef9f6a4ff6207064c7187d7694bf97bd561605a538410,State:CONTAINER_RUNNING,CreatedAt:1744029074691418
251,Labels:map[string]string{io.kubernetes.container.name: echoserver,io.kubernetes.pod.name: hello-node-fcfd88b6f-hq2l4,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 9cd43b15-5de0-4b18-93e0-1a20605b3d3b,},Annotations:map[string]string{io.kubernetes.container.hash: aa985672,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d56da9bec71eb1199a87233f6976d8c3f87e73cdc0e336936406ffa0bc414016,PodSandboxId:250183d67c0abdfbbca1e7baf050e092bd026133834d085970941362166396da,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1744029049854504044,Labels:map[string]string{i
o.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-668d6bf9bc-hlndz,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a7057602-1190-4a5f-90ea-148e2d5babf4,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6c6013453428c3c541fcfd3498084ea17cff6776f774569e24a2a5d87d36ae99,PodSandboxId:f04c38eadb19133055219ae51e4e174500343871db2f4dfc1bb20ada640e2053,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string
{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1744029049401719544,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-668d6bf9bc-52vhr,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 752fe55a-003a-40fa-a1a6-dd762db82e0c,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9930e272f17b71e45f60674688fe7b134094010f20529b2bbc81518611620890,PodSandboxId:cde9f0abcc1c797e36a430a55d1e94b0ef37b148
9754e97a4bf96d406dc5d4a5,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1744029049217840356,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 16ae952f-9e64-4b01-ad75-b556b112cb03,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ad18f7874f75273858975f81e17ce0d514bc3c3c0663b789c0cf9b97d6c5db9a,PodSandboxId:054db3ce1b0abc5de544916f4dac644f5cff1656e8a24d6a9d2f
adad44a09d61,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:f1332858868e1c6a905123b21e2e322ab45a5b99a3532e68ff49a87c2266ebc5,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:f1332858868e1c6a905123b21e2e322ab45a5b99a3532e68ff49a87c2266ebc5,State:CONTAINER_RUNNING,CreatedAt:1744029047886556263,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-48r4x,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 04060509-a7f2-4be6-b807-c709c6c1e3eb,},Annotations:map[string]string{io.kubernetes.container.hash: b4fecc5,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1d400b079c78d32546aa75b8827e7628204648f23b4d6286f7300db0f84913f6,PodSandboxId:b034bf0e0dd3013ae0b2011c772deded3e4ce2a27790d0c4c3a6e6d1cbd44c99,Metadata:&ContainerMe
tadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:85b7a174738baecbc53029b7913cd430a2060e0cbdb5f56c7957d32ff7f241ef,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:85b7a174738baecbc53029b7913cd430a2060e0cbdb5f56c7957d32ff7f241ef,State:CONTAINER_RUNNING,CreatedAt:1744029036307272948,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-functional-728898,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5068eeb41b70734dc3c2019b8280bc0f,},Annotations:map[string]string{io.kubernetes.container.hash: 7745040f,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c3121037762e1f6146a3c087b19535c589d603013d08a88f808c08e4ce645b11,PodSandboxId:257cef0f19a3ed52979effb0faa63323b90ae964f588e4b321504fcfc77acb09,Metadata:&ContainerMetadata{Name:k
ube-scheduler,Attempt:3,},Image:&ImageSpec{Image:d8e673e7c9983f1f53569a9d2ba786c8abb42e3f744f77dc97a595f3caf9435d,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:d8e673e7c9983f1f53569a9d2ba786c8abb42e3f744f77dc97a595f3caf9435d,State:CONTAINER_RUNNING,CreatedAt:1744029036288817771,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-functional-728898,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 65277f868c895be86e767f8d62913710,},Annotations:map[string]string{io.kubernetes.container.hash: 4c5aaea3,io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a5dbf84a635343ac9efc499953db9a64f8df84e22403f3a89220b3c6e7de8128,PodSandboxId:ec3d7895c8a2a83d7dbe96b7dfaa1ae6a8f299a9e4094bec221d230cfd077793,Metadata:&ContainerMetadata{Name:etcd,Attempt:3
,},Image:&ImageSpec{Image:a9e7e6b294baf1695fccb862d956c5d3ad8510e1e4ca1535f35dc09f247abbfc,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a9e7e6b294baf1695fccb862d956c5d3ad8510e1e4ca1535f35dc09f247abbfc,State:CONTAINER_RUNNING,CreatedAt:1744029036252669290,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-functional-728898,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9baea30e9eb118c89ddd369d5e0d1bd6,},Annotations:map[string]string{io.kubernetes.container.hash: e68be80f,io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7486c26ef9200d679640eddc4b175bc9720bddd11b0a617684cb0f38e2f2290a,PodSandboxId:d836f50a336ae77a193527ba127a02d01931d05e46193846e9b9dfdc35d5f99e,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:3,},Image:&ImageSpec{Imag
e:b6a454c5a800d201daacead6ff195ec6049fe6dc086621b0670bca912efaf389,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:b6a454c5a800d201daacead6ff195ec6049fe6dc086621b0670bca912efaf389,State:CONTAINER_RUNNING,CreatedAt:1744029036246488459,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-functional-728898,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d592d768ff54d22d51886bcc5507a7a8,},Annotations:map[string]string{io.kubernetes.container.hash: 51692d3d,io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4448b2213cb7fb1f4a035fa143c4303e3f68cedd25140b0278531edf8fa55a1f,PodSandboxId:8c1c591e2b17c9eba2bf79646fd6152edc2905e781fd186bc7dc497ebe082208,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec
{Image:85b7a174738baecbc53029b7913cd430a2060e0cbdb5f56c7957d32ff7f241ef,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:85b7a174738baecbc53029b7913cd430a2060e0cbdb5f56c7957d32ff7f241ef,State:CONTAINER_EXITED,CreatedAt:1744028774875808632,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-functional-728898,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5068eeb41b70734dc3c2019b8280bc0f,},Annotations:map[string]string{io.kubernetes.container.hash: 7745040f,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=31d7b3be-05c0-4307-8b6b-86afe3b1ba60 name=/runtime.v1.RuntimeService/ListContainers
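
	These crio[4913] entries are the node's CRI debug log: the log collector polls the cri-o runtime over the CRI gRPC API (Version, ImageFsInfo, ListContainers) every few tens of milliseconds, which is why the same container list is dumped repeatedly. A rough way to issue the same queries by hand, assuming the default cri-o setup inside the minikube VM, is:

	# open a shell on the functional-728898 node
	minikube -p functional-728898 ssh
	# assumes crictl is already pointed at the cri-o socket (unix:///var/run/crio/crio.sock); otherwise pass --runtime-endpoint
	sudo crictl version        # RuntimeName: cri-o, RuntimeVersion: 1.29.1, matching the Version responses above
	sudo crictl imagefsinfo    # image filesystem usage, as in the ImageFsInfo responses
	sudo crictl ps -a          # full container list, as in the ListContainers responses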
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                            CREATED              STATE               NAME                        ATTEMPT             POD ID              POD
	61cfcaeb35143       docker.io/kubernetesui/metrics-scraper@sha256:43227e8286fd379ee0415a5e2156a9439c4056807e3caa38e1dd413b0644807a   About a minute ago   Running             dashboard-metrics-scraper   0                   8308802280b4a       dashboard-metrics-scraper-5d59dccf9b-hchz9
	79e6ae6b95171       docker.io/kubernetesui/dashboard@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93         About a minute ago   Running             kubernetes-dashboard        0                   0433f6fe349c3       kubernetes-dashboard-7779f9b69b-wcf49
	3ac976c79c648       gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e              2 minutes ago        Exited              mount-munger                0                   f4c1f4934a896       busybox-mount
	45c2dffaca28c       82e4c8a736a4fcf22b5ef9f6a4ff6207064c7187d7694bf97bd561605a538410                                                 3 minutes ago        Running             echoserver                  0                   7e2f3b699d4fb       hello-node-connect-58f9cf68d8-xlzk8
	8ce4150e5ca64       docker.io/library/nginx@sha256:4ff102c5d78d254a6f0da062b3cf39eaf07f01eec0927fd21e219d0af8bc0591                  3 minutes ago        Running             nginx                       0                   216bf89ff12fa       nginx-svc
	7452ca27af16f       registry.k8s.io/echoserver@sha256:cb3386f863f6a4b05f33c191361723f9d5927ac287463b1bea633bf859475969               3 minutes ago        Running             echoserver                  0                   1e6bf2f675ca0       hello-node-fcfd88b6f-hq2l4
	d56da9bec71eb       c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6                                                 3 minutes ago        Running             coredns                     0                   250183d67c0ab       coredns-668d6bf9bc-hlndz
	6c6013453428c       c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6                                                 3 minutes ago        Running             coredns                     0                   f04c38eadb191       coredns-668d6bf9bc-52vhr
	9930e272f17b7       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                                 3 minutes ago        Running             storage-provisioner         0                   cde9f0abcc1c7       storage-provisioner
	ad18f7874f752       f1332858868e1c6a905123b21e2e322ab45a5b99a3532e68ff49a87c2266ebc5                                                 3 minutes ago        Running             kube-proxy                  0                   054db3ce1b0ab       kube-proxy-48r4x
	1d400b079c78d       85b7a174738baecbc53029b7913cd430a2060e0cbdb5f56c7957d32ff7f241ef                                                 4 minutes ago        Running             kube-apiserver              1                   b034bf0e0dd30       kube-apiserver-functional-728898
	c3121037762e1       d8e673e7c9983f1f53569a9d2ba786c8abb42e3f744f77dc97a595f3caf9435d                                                 4 minutes ago        Running             kube-scheduler              3                   257cef0f19a3e       kube-scheduler-functional-728898
	a5dbf84a63534       a9e7e6b294baf1695fccb862d956c5d3ad8510e1e4ca1535f35dc09f247abbfc                                                 4 minutes ago        Running             etcd                        3                   ec3d7895c8a2a       etcd-functional-728898
	7486c26ef9200       b6a454c5a800d201daacead6ff195ec6049fe6dc086621b0670bca912efaf389                                                 4 minutes ago        Running             kube-controller-manager     3                   d836f50a336ae       kube-controller-manager-functional-728898
	4448b2213cb7f       85b7a174738baecbc53029b7913cd430a2060e0cbdb5f56c7957d32ff7f241ef                                                 8 minutes ago        Exited              kube-apiserver              0                   8c1c591e2b17c       kube-apiserver-functional-728898
	
	
	==> coredns [6c6013453428c3c541fcfd3498084ea17cff6776f774569e24a2a5d87d36ae99] <==
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 1b226df79860026c6a52e67daa10d7f0d57ec5b023288ec00c5e05f93523c894564e15b91770d3a07ae1cfbe861d15b37d4a0027e69c546ab112970993a3b03b
	CoreDNS-1.11.3
	linux/amd64, go1.21.11, a6338e9
	
	
	==> coredns [d56da9bec71eb1199a87233f6976d8c3f87e73cdc0e336936406ffa0bc414016] <==
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 1b226df79860026c6a52e67daa10d7f0d57ec5b023288ec00c5e05f93523c894564e15b91770d3a07ae1cfbe861d15b37d4a0027e69c546ab112970993a3b03b
	CoreDNS-1.11.3
	linux/amd64, go1.21.11, a6338e9
	
	
	==> describe nodes <==
	Name:               functional-728898
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=functional-728898
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=33e6edc58d2014d70e908473920ef4ac8eae1e43
	                    minikube.k8s.io/name=functional-728898
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2025_04_07T12_30_43_0700
	                    minikube.k8s.io/version=v1.35.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Mon, 07 Apr 2025 12:30:39 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  functional-728898
	  AcquireTime:     <unset>
	  RenewTime:       Mon, 07 Apr 2025 12:34:36 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Mon, 07 Apr 2025 12:33:15 +0000   Mon, 07 Apr 2025 12:30:37 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Mon, 07 Apr 2025 12:33:15 +0000   Mon, 07 Apr 2025 12:30:37 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Mon, 07 Apr 2025 12:33:15 +0000   Mon, 07 Apr 2025 12:30:37 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Mon, 07 Apr 2025 12:33:15 +0000   Mon, 07 Apr 2025 12:30:39 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.39.151
	  Hostname:    functional-728898
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             3912788Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             3912788Ki
	  pods:               110
	System Info:
	  Machine ID:                 a18e22468a7c4d53b3b481cf5d4ae418
	  System UUID:                a18e2246-8a7c-4d53-b3b4-81cf5d4ae418
	  Boot ID:                    f0284b9d-ade3-45a2-8f61-7ec448c9266a
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.32.2
	  Kube-Proxy Version:         v1.32.2
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (15 in total)
	  Namespace                   Name                                          CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                          ------------  ----------  ---------------  -------------  ---
	  default                     hello-node-connect-58f9cf68d8-xlzk8           0 (0%)        0 (0%)      0 (0%)           0 (0%)         3m15s
	  default                     hello-node-fcfd88b6f-hq2l4                    0 (0%)        0 (0%)      0 (0%)           0 (0%)         3m27s
	  default                     mysql-58ccfd96bb-lwjg6                        600m (30%)    700m (35%)  512Mi (13%)      700Mi (18%)    3m8s
	  default                     nginx-svc                                     0 (0%)        0 (0%)      0 (0%)           0 (0%)         3m24s
	  default                     sp-pod                                        0 (0%)        0 (0%)      0 (0%)           0 (0%)         3m1s
	  kube-system                 coredns-668d6bf9bc-52vhr                      100m (5%)     0 (0%)      70Mi (1%)        170Mi (4%)     3m49s
	  kube-system                 coredns-668d6bf9bc-hlndz                      100m (5%)     0 (0%)      70Mi (1%)        170Mi (4%)     3m49s
	  kube-system                 etcd-functional-728898                        100m (5%)     0 (0%)      100Mi (2%)       0 (0%)         3m54s
	  kube-system                 kube-apiserver-functional-728898              250m (12%)    0 (0%)      0 (0%)           0 (0%)         3m55s
	  kube-system                 kube-controller-manager-functional-728898     200m (10%)    0 (0%)      0 (0%)           0 (0%)         3m54s
	  kube-system                 kube-proxy-48r4x                              0 (0%)        0 (0%)      0 (0%)           0 (0%)         3m49s
	  kube-system                 kube-scheduler-functional-728898              100m (5%)     0 (0%)      0 (0%)           0 (0%)         3m54s
	  kube-system                 storage-provisioner                           0 (0%)        0 (0%)      0 (0%)           0 (0%)         3m48s
	  kubernetes-dashboard        dashboard-metrics-scraper-5d59dccf9b-hchz9    0 (0%)        0 (0%)      0 (0%)           0 (0%)         3m1s
	  kubernetes-dashboard        kubernetes-dashboard-7779f9b69b-wcf49         0 (0%)        0 (0%)      0 (0%)           0 (0%)         3m1s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests     Limits
	  --------           --------     ------
	  cpu                1450m (72%)  700m (35%)
	  memory             752Mi (19%)  1040Mi (27%)
	  ephemeral-storage  0 (0%)       0 (0%)
	  hugepages-2Mi      0 (0%)       0 (0%)
	Events:
	  Type    Reason                   Age                  From             Message
	  ----    ------                   ----                 ----             -------
	  Normal  Starting                 3m48s                kube-proxy       
	  Normal  Starting                 4m1s                 kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  4m1s (x8 over 4m1s)  kubelet          Node functional-728898 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    4m1s (x8 over 4m1s)  kubelet          Node functional-728898 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     4m1s (x7 over 4m1s)  kubelet          Node functional-728898 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  4m1s                 kubelet          Updated Node Allocatable limit across pods
	  Normal  Starting                 3m54s                kubelet          Starting kubelet.
	  Normal  NodeAllocatableEnforced  3m54s                kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasSufficientMemory  3m54s                kubelet          Node functional-728898 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    3m54s                kubelet          Node functional-728898 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     3m54s                kubelet          Node functional-728898 status is now: NodeHasSufficientPID
	  Normal  RegisteredNode           3m50s                node-controller  Node functional-728898 event: Registered Node functional-728898 in Controller
	
	
	==> dmesg <==
	[  +6.536598] kauditd_printk_skb: 9 callbacks suppressed
	[  +8.303034] systemd-fstab-generator[3534]: Ignoring "noauto" option for root device
	[ +19.279620] kauditd_printk_skb: 12 callbacks suppressed
	[  +0.758322] systemd-fstab-generator[4757]: Ignoring "noauto" option for root device
	[  +0.154465] systemd-fstab-generator[4775]: Ignoring "noauto" option for root device
	[  +0.239609] systemd-fstab-generator[4789]: Ignoring "noauto" option for root device
	[  +0.149705] systemd-fstab-generator[4801]: Ignoring "noauto" option for root device
	[  +0.317598] systemd-fstab-generator[4829]: Ignoring "noauto" option for root device
	[Apr 7 12:26] kauditd_printk_skb: 156 callbacks suppressed
	[  +0.542217] systemd-fstab-generator[5030]: Ignoring "noauto" option for root device
	[  +2.412188] systemd-fstab-generator[5154]: Ignoring "noauto" option for root device
	[  +5.659491] kauditd_printk_skb: 84 callbacks suppressed
	[Apr 7 12:27] kauditd_printk_skb: 17 callbacks suppressed
	[Apr 7 12:30] systemd-fstab-generator[6290]: Ignoring "noauto" option for root device
	[  +7.055574] systemd-fstab-generator[6632]: Ignoring "noauto" option for root device
	[  +0.101623] kauditd_printk_skb: 64 callbacks suppressed
	[  +5.586609] kauditd_printk_skb: 12 callbacks suppressed
	[  +0.216184] systemd-fstab-generator[6814]: Ignoring "noauto" option for root device
	[  +6.275851] kauditd_printk_skb: 72 callbacks suppressed
	[Apr 7 12:31] kauditd_printk_skb: 19 callbacks suppressed
	[  +5.090629] kauditd_printk_skb: 22 callbacks suppressed
	[  +5.143480] kauditd_printk_skb: 16 callbacks suppressed
	[  +6.360557] kauditd_printk_skb: 3 callbacks suppressed
	[  +8.070659] kauditd_printk_skb: 35 callbacks suppressed
	[Apr 7 12:32] kauditd_printk_skb: 40 callbacks suppressed
	
	
	==> etcd [a5dbf84a635343ac9efc499953db9a64f8df84e22403f3a89220b3c6e7de8128] <==
	{"level":"info","ts":"2025-04-07T12:30:37.670267Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2025-04-07T12:30:37.670798Z","caller":"v3rpc/health.go:61","msg":"grpc service status changed","service":"","status":"SERVING"}
	{"level":"info","ts":"2025-04-07T12:30:37.671373Z","caller":"membership/cluster.go:584","msg":"set initial cluster version","cluster-id":"49af9f8d56d5fd66","local-member-id":"cd9667466a016d70","cluster-version":"3.5"}
	{"level":"info","ts":"2025-04-07T12:30:37.671492Z","caller":"api/capability.go:75","msg":"enabled capabilities for version","cluster-version":"3.5"}
	{"level":"info","ts":"2025-04-07T12:30:37.671536Z","caller":"etcdserver/server.go:2675","msg":"cluster version is updated","cluster-version":"3.5"}
	{"level":"info","ts":"2025-04-07T12:30:37.671881Z","caller":"v3rpc/health.go:61","msg":"grpc service status changed","service":"","status":"SERVING"}
	{"level":"info","ts":"2025-04-07T12:30:37.672484Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"127.0.0.1:2379"}
	{"level":"info","ts":"2025-04-07T12:30:37.672654Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2025-04-07T12:30:37.672684Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	{"level":"info","ts":"2025-04-07T12:30:37.675735Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"192.168.39.151:2379"}
	{"level":"info","ts":"2025-04-07T12:31:26.078812Z","caller":"traceutil/trace.go:171","msg":"trace[1497364407] transaction","detail":"{read_only:false; response_revision:559; number_of_response:1; }","duration":"376.791204ms","start":"2025-04-07T12:31:25.702000Z","end":"2025-04-07T12:31:26.078791Z","steps":["trace[1497364407] 'process raft request'  (duration: 376.679151ms)"],"step_count":1}
	{"level":"warn","ts":"2025-04-07T12:31:26.079261Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2025-04-07T12:31:25.701975Z","time spent":"376.914387ms","remote":"127.0.0.1:40850","response type":"/etcdserverpb.KV/Txn","request count":1,"request size":1102,"response count":0,"response size":39,"request content":"compare:<target:MOD key:\"/registry/services/endpoints/kube-system/k8s.io-minikube-hostpath\" mod_revision:558 > success:<request_put:<key:\"/registry/services/endpoints/kube-system/k8s.io-minikube-hostpath\" value_size:1029 >> failure:<request_range:<key:\"/registry/services/endpoints/kube-system/k8s.io-minikube-hostpath\" > >"}
	{"level":"info","ts":"2025-04-07T12:31:26.079674Z","caller":"traceutil/trace.go:171","msg":"trace[1599381144] linearizableReadLoop","detail":"{readStateIndex:583; appliedIndex:583; }","duration":"244.467824ms","start":"2025-04-07T12:31:25.835149Z","end":"2025-04-07T12:31:26.079616Z","steps":["trace[1599381144] 'read index received'  (duration: 244.463381ms)","trace[1599381144] 'applied index is now lower than readState.Index'  (duration: 2.531µs)"],"step_count":2}
	{"level":"warn","ts":"2025-04-07T12:31:26.079776Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"244.617123ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods\" limit:1 ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2025-04-07T12:31:26.081228Z","caller":"traceutil/trace.go:171","msg":"trace[1595407039] range","detail":"{range_begin:/registry/pods; range_end:; response_count:0; response_revision:559; }","duration":"244.68723ms","start":"2025-04-07T12:31:25.835126Z","end":"2025-04-07T12:31:26.079813Z","steps":["trace[1595407039] 'agreement among raft nodes before linearized reading'  (duration: 244.618359ms)"],"step_count":1}
	{"level":"info","ts":"2025-04-07T12:31:31.503518Z","caller":"traceutil/trace.go:171","msg":"trace[1928162418] linearizableReadLoop","detail":"{readStateIndex:623; appliedIndex:622; }","duration":"222.812294ms","start":"2025-04-07T12:31:31.280687Z","end":"2025-04-07T12:31:31.503499Z","steps":["trace[1928162418] 'read index received'  (duration: 222.668549ms)","trace[1928162418] 'applied index is now lower than readState.Index'  (duration: 142.815µs)"],"step_count":2}
	{"level":"warn","ts":"2025-04-07T12:31:31.503733Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"223.031701ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/health\" ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2025-04-07T12:31:31.503800Z","caller":"traceutil/trace.go:171","msg":"trace[1484031424] range","detail":"{range_begin:/registry/health; range_end:; response_count:0; response_revision:597; }","duration":"223.131344ms","start":"2025-04-07T12:31:31.280656Z","end":"2025-04-07T12:31:31.503787Z","steps":["trace[1484031424] 'agreement among raft nodes before linearized reading'  (duration: 223.031182ms)"],"step_count":1}
	{"level":"warn","ts":"2025-04-07T12:31:31.503992Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"176.794182ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods\" limit:1 ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2025-04-07T12:31:31.504046Z","caller":"traceutil/trace.go:171","msg":"trace[1070217299] range","detail":"{range_begin:/registry/pods; range_end:; response_count:0; response_revision:597; }","duration":"176.878442ms","start":"2025-04-07T12:31:31.327157Z","end":"2025-04-07T12:31:31.504036Z","steps":["trace[1070217299] 'agreement among raft nodes before linearized reading'  (duration: 176.725766ms)"],"step_count":1}
	{"level":"info","ts":"2025-04-07T12:32:56.885407Z","caller":"traceutil/trace.go:171","msg":"trace[1165139005] transaction","detail":"{read_only:false; response_revision:756; number_of_response:1; }","duration":"238.68683ms","start":"2025-04-07T12:32:56.646704Z","end":"2025-04-07T12:32:56.885391Z","steps":["trace[1165139005] 'process raft request'  (duration: 238.599489ms)"],"step_count":1}
	{"level":"info","ts":"2025-04-07T12:33:27.279247Z","caller":"traceutil/trace.go:171","msg":"trace[2030718870] linearizableReadLoop","detail":"{readStateIndex:852; appliedIndex:851; }","duration":"116.948433ms","start":"2025-04-07T12:33:27.162284Z","end":"2025-04-07T12:33:27.279233Z","steps":["trace[2030718870] 'read index received'  (duration: 116.787984ms)","trace[2030718870] 'applied index is now lower than readState.Index'  (duration: 160.05µs)"],"step_count":2}
	{"level":"warn","ts":"2025-04-07T12:33:27.279378Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"117.094147ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods\" limit:1 ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2025-04-07T12:33:27.279400Z","caller":"traceutil/trace.go:171","msg":"trace[881207067] range","detail":"{range_begin:/registry/pods; range_end:; response_count:0; response_revision:801; }","duration":"117.162071ms","start":"2025-04-07T12:33:27.162232Z","end":"2025-04-07T12:33:27.279394Z","steps":["trace[881207067] 'agreement among raft nodes before linearized reading'  (duration: 117.089321ms)"],"step_count":1}
	{"level":"info","ts":"2025-04-07T12:33:27.279630Z","caller":"traceutil/trace.go:171","msg":"trace[1471716281] transaction","detail":"{read_only:false; response_revision:801; number_of_response:1; }","duration":"229.823277ms","start":"2025-04-07T12:33:27.049797Z","end":"2025-04-07T12:33:27.279620Z","steps":["trace[1471716281] 'process raft request'  (duration: 229.318566ms)"],"step_count":1}
	
	
	==> kernel <==
	 12:34:36 up 11 min,  0 users,  load average: 0.45, 0.53, 0.32
	Linux functional-728898 5.10.207 #1 SMP Tue Jan 14 08:15:54 UTC 2025 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2023.02.9"
	
	
	==> kube-apiserver [1d400b079c78d32546aa75b8827e7628204648f23b4d6286f7300db0f84913f6] <==
	I0407 12:30:39.222175       1 controller.go:615] quota admission added evaluator for: namespaces
	I0407 12:30:39.301332       1 controller.go:615] quota admission added evaluator for: leases.coordination.k8s.io
	I0407 12:30:40.046404       1 storage_scheduling.go:95] created PriorityClass system-node-critical with value 2000001000
	I0407 12:30:40.058724       1 storage_scheduling.go:95] created PriorityClass system-cluster-critical with value 2000000000
	I0407 12:30:40.059000       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I0407 12:30:41.015289       1 controller.go:615] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I0407 12:30:41.118897       1 controller.go:615] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I0407 12:30:41.271299       1 alloc.go:330] "allocated clusterIPs" service="default/kubernetes" clusterIPs={"IPv4":"10.96.0.1"}
	W0407 12:30:41.290830       1 lease.go:265] Resetting endpoints for master service "kubernetes" to [192.168.39.151]
	I0407 12:30:41.292650       1 controller.go:615] quota admission added evaluator for: endpoints
	I0407 12:30:41.301163       1 controller.go:615] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I0407 12:30:42.126990       1 controller.go:615] quota admission added evaluator for: serviceaccounts
	I0407 12:30:42.485542       1 controller.go:615] quota admission added evaluator for: deployments.apps
	I0407 12:30:42.514215       1 alloc.go:330] "allocated clusterIPs" service="kube-system/kube-dns" clusterIPs={"IPv4":"10.96.0.10"}
	I0407 12:30:42.535359       1 controller.go:615] quota admission added evaluator for: daemonsets.apps
	I0407 12:30:47.324258       1 controller.go:615] quota admission added evaluator for: controllerrevisions.apps
	I0407 12:30:47.530193       1 controller.go:615] quota admission added evaluator for: replicasets.apps
	I0407 12:31:04.195020       1 alloc.go:330] "allocated clusterIPs" service="default/invalid-svc" clusterIPs={"IPv4":"10.105.130.69"}
	I0407 12:31:09.589303       1 alloc.go:330] "allocated clusterIPs" service="default/hello-node" clusterIPs={"IPv4":"10.111.59.160"}
	I0407 12:31:12.072503       1 alloc.go:330] "allocated clusterIPs" service="default/nginx-svc" clusterIPs={"IPv4":"10.96.97.41"}
	I0407 12:31:21.974531       1 alloc.go:330] "allocated clusterIPs" service="default/hello-node-connect" clusterIPs={"IPv4":"10.111.250.166"}
	I0407 12:31:28.109335       1 alloc.go:330] "allocated clusterIPs" service="default/mysql" clusterIPs={"IPv4":"10.104.179.210"}
	E0407 12:31:34.020442       1 conn.go:339] Error on socket receive: read tcp 192.168.39.151:8441->192.168.39.1:48646: use of closed network connection
	I0407 12:31:35.269151       1 alloc.go:330] "allocated clusterIPs" service="kubernetes-dashboard/kubernetes-dashboard" clusterIPs={"IPv4":"10.105.52.253"}
	I0407 12:31:35.348019       1 alloc.go:330] "allocated clusterIPs" service="kubernetes-dashboard/dashboard-metrics-scraper" clusterIPs={"IPv4":"10.110.229.142"}
	
	
	==> kube-apiserver [4448b2213cb7fb1f4a035fa143c4303e3f68cedd25140b0278531edf8fa55a1f] <==
	W0407 12:30:31.350734       1 logging.go:55] [core] [Channel #37 SubChannel #38]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0407 12:30:31.435423       1 logging.go:55] [core] [Channel #115 SubChannel #116]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0407 12:30:31.469400       1 logging.go:55] [core] [Channel #64 SubChannel #65]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0407 12:30:31.494589       1 logging.go:55] [core] [Channel #97 SubChannel #98]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0407 12:30:31.511914       1 logging.go:55] [core] [Channel #178 SubChannel #179]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0407 12:30:31.523312       1 logging.go:55] [core] [Channel #76 SubChannel #77]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0407 12:30:31.535102       1 logging.go:55] [core] [Channel #79 SubChannel #80]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0407 12:30:31.535337       1 logging.go:55] [core] [Channel #103 SubChannel #104]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0407 12:30:31.569704       1 logging.go:55] [core] [Channel #145 SubChannel #146]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0407 12:30:31.611308       1 logging.go:55] [core] [Channel #52 SubChannel #53]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0407 12:30:31.635718       1 logging.go:55] [core] [Channel #151 SubChannel #152]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0407 12:30:31.654664       1 logging.go:55] [core] [Channel #109 SubChannel #110]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0407 12:30:31.664385       1 logging.go:55] [core] [Channel #133 SubChannel #134]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0407 12:30:31.671219       1 logging.go:55] [core] [Channel #139 SubChannel #140]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0407 12:30:31.680018       1 logging.go:55] [core] [Channel #34 SubChannel #35]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0407 12:30:31.744506       1 logging.go:55] [core] [Channel #154 SubChannel #155]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0407 12:30:31.749228       1 logging.go:55] [core] [Channel #124 SubChannel #125]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0407 12:30:31.816075       1 logging.go:55] [core] [Channel #175 SubChannel #176]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0407 12:30:31.816258       1 logging.go:55] [core] [Channel #169 SubChannel #170]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0407 12:30:31.838879       1 logging.go:55] [core] [Channel #46 SubChannel #47]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0407 12:30:32.032146       1 logging.go:55] [core] [Channel #40 SubChannel #41]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0407 12:30:32.051575       1 logging.go:55] [core] [Channel #118 SubChannel #119]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0407 12:30:32.056430       1 logging.go:55] [core] [Channel #106 SubChannel #107]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0407 12:30:32.063222       1 logging.go:55] [core] [Channel #157 SubChannel #158]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0407 12:30:32.339353       1 logging.go:55] [core] [Channel #70 SubChannel #71]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	
	
	==> kube-controller-manager [7486c26ef9200d679640eddc4b175bc9720bddd11b0a617684cb0f38e2f2290a] <==
	I0407 12:31:34.973772       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kubernetes-dashboard/kubernetes-dashboard-7779f9b69b" duration="9.694368ms"
	E0407 12:31:34.973965       1 replica_set.go:560] "Unhandled Error" err="sync \"kubernetes-dashboard/kubernetes-dashboard-7779f9b69b\" failed with pods \"kubernetes-dashboard-7779f9b69b-\" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount \"kubernetes-dashboard\" not found" logger="UnhandledError"
	I0407 12:31:35.008358       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kubernetes-dashboard/dashboard-metrics-scraper-5d59dccf9b" duration="32.344593ms"
	E0407 12:31:35.008425       1 replica_set.go:560] "Unhandled Error" err="sync \"kubernetes-dashboard/dashboard-metrics-scraper-5d59dccf9b\" failed with pods \"dashboard-metrics-scraper-5d59dccf9b-\" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount \"kubernetes-dashboard\" not found" logger="UnhandledError"
	I0407 12:31:35.008537       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kubernetes-dashboard/kubernetes-dashboard-7779f9b69b" duration="32.445387ms"
	E0407 12:31:35.008555       1 replica_set.go:560] "Unhandled Error" err="sync \"kubernetes-dashboard/kubernetes-dashboard-7779f9b69b\" failed with pods \"kubernetes-dashboard-7779f9b69b-\" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount \"kubernetes-dashboard\" not found" logger="UnhandledError"
	I0407 12:31:35.069322       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kubernetes-dashboard/kubernetes-dashboard-7779f9b69b" duration="57.668073ms"
	I0407 12:31:35.098687       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kubernetes-dashboard/dashboard-metrics-scraper-5d59dccf9b" duration="86.959809ms"
	I0407 12:31:35.153396       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kubernetes-dashboard/kubernetes-dashboard-7779f9b69b" duration="84.003882ms"
	I0407 12:31:35.157183       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kubernetes-dashboard/kubernetes-dashboard-7779f9b69b" duration="128.559µs"
	I0407 12:31:35.167835       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kubernetes-dashboard/dashboard-metrics-scraper-5d59dccf9b" duration="69.078417ms"
	I0407 12:31:35.168134       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kubernetes-dashboard/dashboard-metrics-scraper-5d59dccf9b" duration="233.503µs"
	I0407 12:31:35.210288       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kubernetes-dashboard/dashboard-metrics-scraper-5d59dccf9b" duration="82.927µs"
	I0407 12:31:35.244402       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kubernetes-dashboard/kubernetes-dashboard-7779f9b69b" duration="59.769µs"
	I0407 12:31:43.822441       1 range_allocator.go:247] "Successfully synced" logger="node-ipam-controller" key="functional-728898"
	I0407 12:32:16.211032       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/mysql-58ccfd96bb" duration="41.566µs"
	I0407 12:32:31.438987       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/mysql-58ccfd96bb" duration="140.138µs"
	I0407 12:32:44.914182       1 range_allocator.go:247] "Successfully synced" logger="node-ipam-controller" key="functional-728898"
	I0407 12:32:57.448590       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kubernetes-dashboard/kubernetes-dashboard-7779f9b69b" duration="25.15825ms"
	I0407 12:32:57.449242       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kubernetes-dashboard/kubernetes-dashboard-7779f9b69b" duration="56.981µs"
	I0407 12:33:00.460846       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kubernetes-dashboard/dashboard-metrics-scraper-5d59dccf9b" duration="17.619517ms"
	I0407 12:33:00.463099       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kubernetes-dashboard/dashboard-metrics-scraper-5d59dccf9b" duration="60.854µs"
	I0407 12:33:15.336079       1 range_allocator.go:247] "Successfully synced" logger="node-ipam-controller" key="functional-728898"
	I0407 12:33:43.432834       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/mysql-58ccfd96bb" duration="45.34µs"
	I0407 12:33:56.435800       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/mysql-58ccfd96bb" duration="54.877µs"
	
	
	==> kube-proxy [ad18f7874f75273858975f81e17ce0d514bc3c3c0663b789c0cf9b97d6c5db9a] <==
		add table ip kube-proxy
		^^^^^^^^^^^^^^^^^^^^^^^^
	 >
	E0407 12:30:48.190886       1 proxier.go:733] "Error cleaning up nftables rules" err=<
		could not run nftables command: /dev/stdin:1:1-25: Error: Could not process rule: Operation not supported
		add table ip6 kube-proxy
		^^^^^^^^^^^^^^^^^^^^^^^^^
	 >
	I0407 12:30:48.201043       1 server.go:698] "Successfully retrieved node IP(s)" IPs=["192.168.39.151"]
	E0407 12:30:48.201181       1 server.go:234] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I0407 12:30:48.237798       1 server_linux.go:147] "No iptables support for family" ipFamily="IPv6"
	I0407 12:30:48.237845       1 server.go:245] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I0407 12:30:48.237868       1 server_linux.go:170] "Using iptables Proxier"
	I0407 12:30:48.241774       1 proxier.go:255] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I0407 12:30:48.242120       1 server.go:497] "Version info" version="v1.32.2"
	I0407 12:30:48.242167       1 server.go:499] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0407 12:30:48.243461       1 config.go:199] "Starting service config controller"
	I0407 12:30:48.243522       1 shared_informer.go:313] Waiting for caches to sync for service config
	I0407 12:30:48.243554       1 config.go:105] "Starting endpoint slice config controller"
	I0407 12:30:48.243570       1 shared_informer.go:313] Waiting for caches to sync for endpoint slice config
	I0407 12:30:48.244139       1 config.go:329] "Starting node config controller"
	I0407 12:30:48.244179       1 shared_informer.go:313] Waiting for caches to sync for node config
	I0407 12:30:48.343736       1 shared_informer.go:320] Caches are synced for endpoint slice config
	I0407 12:30:48.343791       1 shared_informer.go:320] Caches are synced for service config
	I0407 12:30:48.345592       1 shared_informer.go:320] Caches are synced for node config
	
	
	==> kube-scheduler [c3121037762e1f6146a3c087b19535c589d603013d08a88f808c08e4ce645b11] <==
	W0407 12:30:40.052322       1 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	E0407 12:30:40.052411       1 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumeclaims\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0407 12:30:40.070143       1 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	E0407 12:30:40.070298       1 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Pod: failed to list *v1.Pod: pods is forbidden: User \"system:kube-scheduler\" cannot list resource \"pods\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0407 12:30:40.242271       1 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	E0407 12:30:40.242309       1 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PodDisruptionBudget: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User \"system:kube-scheduler\" cannot list resource \"poddisruptionbudgets\" in API group \"policy\" at the cluster scope" logger="UnhandledError"
	W0407 12:30:40.324029       1 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	E0407 12:30:40.325773       1 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicasets\" in API group \"apps\" at the cluster scope" logger="UnhandledError"
	W0407 12:30:40.342689       1 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	E0407 12:30:40.343174       1 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csidrivers\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	W0407 12:30:40.480619       1 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	E0407 12:30:40.480659       1 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicationcontrollers\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0407 12:30:40.489790       1 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	E0407 12:30:40.489889       1 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Namespace: failed to list *v1.Namespace: namespaces is forbidden: User \"system:kube-scheduler\" cannot list resource \"namespaces\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0407 12:30:40.499982       1 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	E0407 12:30:40.500033       1 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"storageclasses\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	W0407 12:30:40.562101       1 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	E0407 12:30:40.562199       1 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"statefulsets\" in API group \"apps\" at the cluster scope" logger="UnhandledError"
	W0407 12:30:40.642746       1 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.VolumeAttachment: volumeattachments.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "volumeattachments" in API group "storage.k8s.io" at the cluster scope
	E0407 12:30:40.642803       1 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.VolumeAttachment: failed to list *v1.VolumeAttachment: volumeattachments.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"volumeattachments\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	W0407 12:30:40.676807       1 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	E0407 12:30:40.677162       1 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User \"system:kube-scheduler\" cannot list resource \"services\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0407 12:30:40.698424       1 reflector.go:569] runtime/asm_amd64.s:1700: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	E0407 12:30:40.698589       1 reflector.go:166] "Unhandled Error" err="runtime/asm_amd64.s:1700: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"extension-apiserver-authentication\" is forbidden: User \"system:kube-scheduler\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\"" logger="UnhandledError"
	I0407 12:30:43.841714       1 shared_informer.go:320] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	
	==> kubelet <==
	Apr 07 12:33:31 functional-728898 kubelet[6639]: E0407 12:33:31.337277    6639 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"mysql\" with ErrImagePull: \"reading manifest 5.7 in docker.io/library/mysql: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="default/mysql-58ccfd96bb-lwjg6" podUID="469f1d47-d0fc-45ea-834b-f92ab80cba97"
	Apr 07 12:33:32 functional-728898 kubelet[6639]: E0407 12:33:32.608770    6639 eviction_manager.go:259] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1744029212608446340,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:294016,},InodesUsed:&UInt64Value{Value:127,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Apr 07 12:33:32 functional-728898 kubelet[6639]: E0407 12:33:32.608818    6639 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1744029212608446340,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:294016,},InodesUsed:&UInt64Value{Value:127,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Apr 07 12:33:42 functional-728898 kubelet[6639]: E0407 12:33:42.449569    6639 iptables.go:577] "Could not set up iptables canary" err=<
	Apr 07 12:33:42 functional-728898 kubelet[6639]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Apr 07 12:33:42 functional-728898 kubelet[6639]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Apr 07 12:33:42 functional-728898 kubelet[6639]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Apr 07 12:33:42 functional-728898 kubelet[6639]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Apr 07 12:33:42 functional-728898 kubelet[6639]: E0407 12:33:42.610712    6639 eviction_manager.go:259] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1744029222610304115,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:294016,},InodesUsed:&UInt64Value{Value:127,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Apr 07 12:33:42 functional-728898 kubelet[6639]: E0407 12:33:42.610783    6639 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1744029222610304115,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:294016,},InodesUsed:&UInt64Value{Value:127,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Apr 07 12:33:43 functional-728898 kubelet[6639]: E0407 12:33:43.417835    6639 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"mysql\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/mysql:5.7\\\": ErrImagePull: reading manifest 5.7 in docker.io/library/mysql: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="default/mysql-58ccfd96bb-lwjg6" podUID="469f1d47-d0fc-45ea-834b-f92ab80cba97"
	Apr 07 12:33:52 functional-728898 kubelet[6639]: E0407 12:33:52.614149    6639 eviction_manager.go:259] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1744029232612338393,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:294016,},InodesUsed:&UInt64Value{Value:127,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Apr 07 12:33:52 functional-728898 kubelet[6639]: E0407 12:33:52.614461    6639 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1744029232612338393,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:294016,},InodesUsed:&UInt64Value{Value:127,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Apr 07 12:34:02 functional-728898 kubelet[6639]: E0407 12:34:02.617366    6639 eviction_manager.go:259] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1744029242616476886,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:294016,},InodesUsed:&UInt64Value{Value:127,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Apr 07 12:34:02 functional-728898 kubelet[6639]: E0407 12:34:02.617418    6639 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1744029242616476886,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:294016,},InodesUsed:&UInt64Value{Value:127,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Apr 07 12:34:12 functional-728898 kubelet[6639]: E0407 12:34:12.620321    6639 eviction_manager.go:259] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1744029252619708755,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:294016,},InodesUsed:&UInt64Value{Value:127,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Apr 07 12:34:12 functional-728898 kubelet[6639]: E0407 12:34:12.620398    6639 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1744029252619708755,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:294016,},InodesUsed:&UInt64Value{Value:127,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Apr 07 12:34:22 functional-728898 kubelet[6639]: E0407 12:34:22.625447    6639 eviction_manager.go:259] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1744029262624822913,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:294016,},InodesUsed:&UInt64Value{Value:127,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Apr 07 12:34:22 functional-728898 kubelet[6639]: E0407 12:34:22.625978    6639 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1744029262624822913,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:294016,},InodesUsed:&UInt64Value{Value:127,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Apr 07 12:34:32 functional-728898 kubelet[6639]: E0407 12:34:32.628660    6639 eviction_manager.go:259] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1744029272628205343,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:294016,},InodesUsed:&UInt64Value{Value:127,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Apr 07 12:34:32 functional-728898 kubelet[6639]: E0407 12:34:32.628702    6639 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1744029272628205343,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:294016,},InodesUsed:&UInt64Value{Value:127,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Apr 07 12:34:33 functional-728898 kubelet[6639]: E0407 12:34:33.640531    6639 log.go:32] "PullImage from image service failed" err="rpc error: code = Unknown desc = fetching target platform image selected from image index: reading manifest sha256:54809b2f36d0ff38e8e5362b0239779e4b75c2f19ad70ef047ed050f01506bb4 in docker.io/library/nginx: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit" image="docker.io/nginx:latest"
	Apr 07 12:34:33 functional-728898 kubelet[6639]: E0407 12:34:33.640851    6639 kuberuntime_image.go:55] "Failed to pull image" err="fetching target platform image selected from image index: reading manifest sha256:54809b2f36d0ff38e8e5362b0239779e4b75c2f19ad70ef047ed050f01506bb4 in docker.io/library/nginx: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit" image="docker.io/nginx:latest"
	Apr 07 12:34:33 functional-728898 kubelet[6639]: E0407 12:34:33.641183    6639 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:myfrontend,Image:docker.io/nginx,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:mypd,ReadOnly:false,MountPath:/tmp/mount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-6cmgj,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:nil,Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod sp-pod_default(59ccb8e9-51d3-4535-9029-fdfba6756cdc): ErrImagePull: fetching target platform image selected from image index: reading manifest sha256:54809b2f36d0ff38e8e5362b0239779e4b75c2f19ad70ef047ed050f01506bb4 in docker.io/library/nginx: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit" logger="UnhandledError"
	Apr 07 12:34:33 functional-728898 kubelet[6639]: E0407 12:34:33.643258    6639 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"myfrontend\" with ErrImagePull: \"fetching target platform image selected from image index: reading manifest sha256:54809b2f36d0ff38e8e5362b0239779e4b75c2f19ad70ef047ed050f01506bb4 in docker.io/library/nginx: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="default/sp-pod" podUID="59ccb8e9-51d3-4535-9029-fdfba6756cdc"
	
	
	==> kubernetes-dashboard [79e6ae6b95171565d73c07c01490487ea7fee11ba11f7e6a30a204b86335a407] <==
	2025/04/07 12:32:57 Using namespace: kubernetes-dashboard
	2025/04/07 12:32:57 Using in-cluster config to connect to apiserver
	2025/04/07 12:32:57 Using secret token for csrf signing
	2025/04/07 12:32:57 Initializing csrf token from kubernetes-dashboard-csrf secret
	2025/04/07 12:32:57 Empty token. Generating and storing in a secret kubernetes-dashboard-csrf
	2025/04/07 12:32:57 Successful initial request to the apiserver, version: v1.32.2
	2025/04/07 12:32:57 Generating JWE encryption key
	2025/04/07 12:32:57 New synchronizer has been registered: kubernetes-dashboard-key-holder-kubernetes-dashboard. Starting
	2025/04/07 12:32:57 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kubernetes-dashboard
	2025/04/07 12:32:57 Initializing JWE encryption key from synchronized object
	2025/04/07 12:32:57 Creating in-cluster Sidecar client
	2025/04/07 12:32:57 Serving insecurely on HTTP port: 9090
	2025/04/07 12:32:57 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	2025/04/07 12:33:27 Successful request to sidecar
	2025/04/07 12:32:57 Starting overwatch
	
	
	==> storage-provisioner [9930e272f17b71e45f60674688fe7b134094010f20529b2bbc81518611620890] <==
	I0407 12:30:49.340577       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I0407 12:30:49.358856       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I0407 12:30:49.359013       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	I0407 12:30:49.375283       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I0407 12:30:49.375512       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_functional-728898_d05a709f-ce9c-4fe9-847c-85935f5dea43!
	I0407 12:30:49.376838       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"c3fea848-7c7f-4876-8847-a2c18a541bf2", APIVersion:"v1", ResourceVersion:"395", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' functional-728898_d05a709f-ce9c-4fe9-847c-85935f5dea43 became leader
	I0407 12:30:49.476087       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_functional-728898_d05a709f-ce9c-4fe9-847c-85935f5dea43!
	I0407 12:31:15.591256       1 controller.go:1332] provision "default/myclaim" class "standard": started
	I0407 12:31:15.592280       1 event.go:282] Event(v1.ObjectReference{Kind:"PersistentVolumeClaim", Namespace:"default", Name:"myclaim", UID:"f2b97eeb-298e-4a42-bbfa-299ca292c752", APIVersion:"v1", ResourceVersion:"508", FieldPath:""}): type: 'Normal' reason: 'Provisioning' External provisioner is provisioning volume for claim "default/myclaim"
	I0407 12:31:15.591545       1 storage_provisioner.go:61] Provisioning volume {&StorageClass{ObjectMeta:{standard    35e91b18-6ea1-43db-9800-acb83bd0568f 378 0 2025-04-07 12:30:48 +0000 UTC <nil> <nil> map[addonmanager.kubernetes.io/mode:EnsureExists] map[kubectl.kubernetes.io/last-applied-configuration:{"apiVersion":"storage.k8s.io/v1","kind":"StorageClass","metadata":{"annotations":{"storageclass.kubernetes.io/is-default-class":"true"},"labels":{"addonmanager.kubernetes.io/mode":"EnsureExists"},"name":"standard"},"provisioner":"k8s.io/minikube-hostpath"}
	 storageclass.kubernetes.io/is-default-class:true] [] []  [{kubectl-client-side-apply Update storage.k8s.io/v1 2025-04-07 12:30:48 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{".":{},"f:kubectl.kubernetes.io/last-applied-configuration":{},"f:storageclass.kubernetes.io/is-default-class":{}},"f:labels":{".":{},"f:addonmanager.kubernetes.io/mode":{}}},"f:provisioner":{},"f:reclaimPolicy":{},"f:volumeBindingMode":{}}}]},Provisioner:k8s.io/minikube-hostpath,Parameters:map[string]string{},ReclaimPolicy:*Delete,MountOptions:[],AllowVolumeExpansion:nil,VolumeBindingMode:*Immediate,AllowedTopologies:[]TopologySelectorTerm{},} pvc-f2b97eeb-298e-4a42-bbfa-299ca292c752 &PersistentVolumeClaim{ObjectMeta:{myclaim  default  f2b97eeb-298e-4a42-bbfa-299ca292c752 508 0 2025-04-07 12:31:15 +0000 UTC <nil> <nil> map[] map[kubectl.kubernetes.io/last-applied-configuration:{"apiVersion":"v1","kind":"PersistentVolumeClaim","metadata":{"annotations":{},"name":"myclaim","namespace":"default"},"spec":{"accessModes":["Rea
dWriteOnce"],"resources":{"requests":{"storage":"500Mi"}},"volumeMode":"Filesystem"}}
	 volume.beta.kubernetes.io/storage-provisioner:k8s.io/minikube-hostpath volume.kubernetes.io/storage-provisioner:k8s.io/minikube-hostpath] [] [kubernetes.io/pvc-protection]  [{kube-controller-manager Update v1 2025-04-07 12:31:15 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:volume.beta.kubernetes.io/storage-provisioner":{},"f:volume.kubernetes.io/storage-provisioner":{}}}}} {kubectl-client-side-apply Update v1 2025-04-07 12:31:15 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{".":{},"f:kubectl.kubernetes.io/last-applied-configuration":{}}},"f:spec":{"f:accessModes":{},"f:resources":{"f:requests":{".":{},"f:storage":{}}},"f:volumeMode":{}}}}]},Spec:PersistentVolumeClaimSpec{AccessModes:[ReadWriteOnce],Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{storage: {{524288000 0} {<nil>} 500Mi BinarySI},},},VolumeName:,Selector:nil,StorageClassName:*standard,VolumeMode:*Filesystem,DataSource:nil,},Status:PersistentVolumeClaimStatus{Phase:Pending,AccessModes:[],Capacity:
ResourceList{},Conditions:[]PersistentVolumeClaimCondition{},},} nil} to /tmp/hostpath-provisioner/default/myclaim
	I0407 12:31:15.593022       1 controller.go:1439] provision "default/myclaim" class "standard": volume "pvc-f2b97eeb-298e-4a42-bbfa-299ca292c752" provisioned
	I0407 12:31:15.593111       1 controller.go:1456] provision "default/myclaim" class "standard": succeeded
	I0407 12:31:15.593136       1 volume_store.go:212] Trying to save persistentvolume "pvc-f2b97eeb-298e-4a42-bbfa-299ca292c752"
	I0407 12:31:15.608870       1 volume_store.go:219] persistentvolume "pvc-f2b97eeb-298e-4a42-bbfa-299ca292c752" saved
	I0407 12:31:15.611834       1 event.go:282] Event(v1.ObjectReference{Kind:"PersistentVolumeClaim", Namespace:"default", Name:"myclaim", UID:"f2b97eeb-298e-4a42-bbfa-299ca292c752", APIVersion:"v1", ResourceVersion:"508", FieldPath:""}): type: 'Normal' reason: 'ProvisioningSucceeded' Successfully provisioned volume pvc-f2b97eeb-298e-4a42-bbfa-299ca292c752
	

                                                
                                                
-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p functional-728898 -n functional-728898
helpers_test.go:261: (dbg) Run:  kubectl --context functional-728898 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:272: non-running pods: busybox-mount mysql-58ccfd96bb-lwjg6 sp-pod
helpers_test.go:274: ======> post-mortem[TestFunctional/parallel/PersistentVolumeClaim]: describe non-running pods <======
helpers_test.go:277: (dbg) Run:  kubectl --context functional-728898 describe pod busybox-mount mysql-58ccfd96bb-lwjg6 sp-pod
helpers_test.go:282: (dbg) kubectl --context functional-728898 describe pod busybox-mount mysql-58ccfd96bb-lwjg6 sp-pod:

                                                
                                                
-- stdout --
	Name:             busybox-mount
	Namespace:        default
	Priority:         0
	Service Account:  default
	Node:             functional-728898/192.168.39.151
	Start Time:       Mon, 07 Apr 2025 12:31:29 +0000
	Labels:           integration-test=busybox-mount
	Annotations:      <none>
	Status:           Succeeded
	IP:               10.244.0.14
	IPs:
	  IP:  10.244.0.14
	Containers:
	  mount-munger:
	    Container ID:  cri-o://3ac976c79c64830e17267759075a44ad7a7741506cb370dd4507b325c9883b76
	    Image:         gcr.io/k8s-minikube/busybox:1.28.4-glibc
	    Image ID:      56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c
	    Port:          <none>
	    Host Port:     <none>
	    Command:
	      /bin/sh
	      -c
	      --
	    Args:
	      cat /mount-9p/created-by-test; echo test > /mount-9p/created-by-pod; rm /mount-9p/created-by-test-removed-by-pod; echo test > /mount-9p/created-by-pod-removed-by-test date >> /mount-9p/pod-dates
	    State:          Terminated
	      Reason:       Completed
	      Exit Code:    0
	      Started:      Mon, 07 Apr 2025 12:32:19 +0000
	      Finished:     Mon, 07 Apr 2025 12:32:19 +0000
	    Ready:          False
	    Restart Count:  0
	    Environment:    <none>
	    Mounts:
	      /mount-9p from test-volume (rw)
	      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-8pmkp (ro)
	Conditions:
	  Type                        Status
	  PodReadyToStartContainers   False 
	  Initialized                 True 
	  Ready                       False 
	  ContainersReady             False 
	  PodScheduled                True 
	Volumes:
	  test-volume:
	    Type:          HostPath (bare host directory volume)
	    Path:          /mount-9p
	    HostPathType:  
	  kube-api-access-8pmkp:
	    Type:                    Projected (a volume that contains injected data from multiple sources)
	    TokenExpirationSeconds:  3607
	    ConfigMapName:           kube-root-ca.crt
	    ConfigMapOptional:       <nil>
	    DownwardAPI:             true
	QoS Class:                   BestEffort
	Node-Selectors:              <none>
	Tolerations:                 node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
	                             node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
	Events:
	  Type    Reason     Age    From               Message
	  ----    ------     ----   ----               -------
	  Normal  Scheduled  3m8s   default-scheduler  Successfully assigned default/busybox-mount to functional-728898
	  Normal  Pulling    3m8s   kubelet            Pulling image "gcr.io/k8s-minikube/busybox:1.28.4-glibc"
	  Normal  Pulled     2m18s  kubelet            Successfully pulled image "gcr.io/k8s-minikube/busybox:1.28.4-glibc" in 4.101s (49.761s including waiting). Image size: 4631262 bytes.
	  Normal  Created    2m18s  kubelet            Created container: mount-munger
	  Normal  Started    2m18s  kubelet            Started container mount-munger
	
	
	Name:             mysql-58ccfd96bb-lwjg6
	Namespace:        default
	Priority:         0
	Service Account:  default
	Node:             functional-728898/192.168.39.151
	Start Time:       Mon, 07 Apr 2025 12:31:28 +0000
	Labels:           app=mysql
	                  pod-template-hash=58ccfd96bb
	Annotations:      <none>
	Status:           Pending
	IP:               10.244.0.13
	IPs:
	  IP:           10.244.0.13
	Controlled By:  ReplicaSet/mysql-58ccfd96bb
	Containers:
	  mysql:
	    Container ID:   
	    Image:          docker.io/mysql:5.7
	    Image ID:       
	    Port:           3306/TCP
	    Host Port:      0/TCP
	    State:          Waiting
	      Reason:       ImagePullBackOff
	    Ready:          False
	    Restart Count:  0
	    Limits:
	      cpu:     700m
	      memory:  700Mi
	    Requests:
	      cpu:     600m
	      memory:  512Mi
	    Environment:
	      MYSQL_ROOT_PASSWORD:  password
	    Mounts:
	      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-5gszn (ro)
	Conditions:
	  Type                        Status
	  PodReadyToStartContainers   True 
	  Initialized                 True 
	  Ready                       False 
	  ContainersReady             False 
	  PodScheduled                True 
	Volumes:
	  kube-api-access-5gszn:
	    Type:                    Projected (a volume that contains injected data from multiple sources)
	    TokenExpirationSeconds:  3607
	    ConfigMapName:           kube-root-ca.crt
	    ConfigMapOptional:       <nil>
	    DownwardAPI:             true
	QoS Class:                   Burstable
	Node-Selectors:              <none>
	Tolerations:                 node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
	                             node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
	Events:
	  Type     Reason     Age                  From               Message
	  ----     ------     ----                 ----               -------
	  Normal   Scheduled  3m9s                 default-scheduler  Successfully assigned default/mysql-58ccfd96bb-lwjg6 to functional-728898
	  Warning  Failed     2m22s                kubelet            Failed to pull image "docker.io/mysql:5.7": initializing source docker://mysql:5.7: reading manifest 5.7 in docker.io/library/mysql: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit
	  Warning  Failed     66s (x2 over 2m22s)  kubelet            Error: ErrImagePull
	  Warning  Failed     66s                  kubelet            Failed to pull image "docker.io/mysql:5.7": reading manifest 5.7 in docker.io/library/mysql: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit
	  Normal   BackOff    54s (x2 over 2m21s)  kubelet            Back-off pulling image "docker.io/mysql:5.7"
	  Warning  Failed     54s (x2 over 2m21s)  kubelet            Error: ImagePullBackOff
	  Normal   Pulling    41s (x3 over 3m9s)   kubelet            Pulling image "docker.io/mysql:5.7"
	
	
	Name:             sp-pod
	Namespace:        default
	Priority:         0
	Service Account:  default
	Node:             functional-728898/192.168.39.151
	Start Time:       Mon, 07 Apr 2025 12:31:35 +0000
	Labels:           test=storage-provisioner
	Annotations:      <none>
	Status:           Pending
	IP:               10.244.0.15
	IPs:
	  IP:  10.244.0.15
	Containers:
	  myfrontend:
	    Container ID:   
	    Image:          docker.io/nginx
	    Image ID:       
	    Port:           <none>
	    Host Port:      <none>
	    State:          Waiting
	      Reason:       ImagePullBackOff
	    Ready:          False
	    Restart Count:  0
	    Environment:    <none>
	    Mounts:
	      /tmp/mount from mypd (rw)
	      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-6cmgj (ro)
	Conditions:
	  Type                        Status
	  PodReadyToStartContainers   True 
	  Initialized                 True 
	  Ready                       False 
	  ContainersReady             False 
	  PodScheduled                True 
	Volumes:
	  mypd:
	    Type:       PersistentVolumeClaim (a reference to a PersistentVolumeClaim in the same namespace)
	    ClaimName:  myclaim
	    ReadOnly:   false
	  kube-api-access-6cmgj:
	    Type:                    Projected (a volume that contains injected data from multiple sources)
	    TokenExpirationSeconds:  3607
	    ConfigMapName:           kube-root-ca.crt
	    ConfigMapOptional:       <nil>
	    DownwardAPI:             true
	QoS Class:                   BestEffort
	Node-Selectors:              <none>
	Tolerations:                 node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
	                             node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
	Events:
	  Type     Reason     Age                 From               Message
	  ----     ------     ----                ----               -------
	  Normal   Scheduled  3m2s                default-scheduler  Successfully assigned default/sp-pod to functional-728898
	  Warning  Failed     106s                kubelet            Failed to pull image "docker.io/nginx": reading manifest latest in docker.io/library/nginx: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit
	  Normal   BackOff    106s                kubelet            Back-off pulling image "docker.io/nginx"
	  Warning  Failed     106s                kubelet            Error: ImagePullBackOff
	  Normal   Pulling    95s (x2 over 3m2s)  kubelet            Pulling image "docker.io/nginx"
	  Warning  Failed     4s (x2 over 106s)   kubelet            Error: ErrImagePull
	  Warning  Failed     4s                  kubelet            Failed to pull image "docker.io/nginx": fetching target platform image selected from image index: reading manifest sha256:54809b2f36d0ff38e8e5362b0239779e4b75c2f19ad70ef047ed050f01506bb4 in docker.io/library/nginx: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit

                                                
                                                
-- /stdout --
helpers_test.go:285: <<< TestFunctional/parallel/PersistentVolumeClaim FAILED: end of post-mortem logs <<<
helpers_test.go:286: ---------------------/post-mortem---------------------------------
E0407 12:37:04.915551 1169716 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20602-1162386/.minikube/profiles/addons-660533/client.crt: no such file or directory" logger="UnhandledError"
E0407 12:38:27.998345 1169716 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20602-1162386/.minikube/profiles/addons-660533/client.crt: no such file or directory" logger="UnhandledError"
--- FAIL: TestFunctional/parallel/PersistentVolumeClaim (208.49s)
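
For orientation, the post-mortem step above collects the stuck pods with "kubectl get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running". A minimal client-go sketch of the same query is shown below; it is an illustration only, not code from the test harness, and the kubeconfig loading and context name are assumptions taken from the commands in this report.

package main

import (
	"context"
	"fmt"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	// Load kubeconfig the same way kubectl does, pinned to the profile's context
	// (an assumption; the harness passes --context functional-728898 explicitly).
	cfg, err := clientcmd.NewNonInteractiveDeferredLoadingClientConfig(
		clientcmd.NewDefaultClientConfigLoadingRules(),
		&clientcmd.ConfigOverrides{CurrentContext: "functional-728898"},
	).ClientConfig()
	if err != nil {
		panic(err)
	}
	cs := kubernetes.NewForConfigOrDie(cfg)

	// Same filter as the post-mortem step: every pod whose phase is not Running.
	pods, err := cs.CoreV1().Pods(metav1.NamespaceAll).List(context.Background(),
		metav1.ListOptions{FieldSelector: "status.phase!=Running"})
	if err != nil {
		panic(err)
	}
	for _, p := range pods.Items {
		fmt.Printf("%s/%s\t%s\n", p.Namespace, p.Name, p.Status.Phase)
	}
}
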

                                                
                                    
x
+
TestFunctional/parallel/MySQL (602.71s)

                                                
                                                
=== RUN   TestFunctional/parallel/MySQL
=== PAUSE TestFunctional/parallel/MySQL

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/MySQL
functional_test.go:1810: (dbg) Run:  kubectl --context functional-728898 replace --force -f testdata/mysql.yaml
functional_test.go:1816: (dbg) TestFunctional/parallel/MySQL: waiting 10m0s for pods matching "app=mysql" in namespace "default" ...
helpers_test.go:344: "mysql-58ccfd96bb-lwjg6" [469f1d47-d0fc-45ea-834b-f92ab80cba97] Pending
helpers_test.go:344: "mysql-58ccfd96bb-lwjg6" [469f1d47-d0fc-45ea-834b-f92ab80cba97] Pending / Ready:ContainersNotReady (containers with unready status: [mysql]) / ContainersReady:ContainersNotReady (containers with unready status: [mysql])
helpers_test.go:329: TestFunctional/parallel/MySQL: WARNING: pod list for "default" "app=mysql" returned: client rate limiter Wait returned an error: context deadline exceeded
functional_test.go:1816: ***** TestFunctional/parallel/MySQL: pod "app=mysql" failed to start within 10m0s: context deadline exceeded ****
functional_test.go:1816: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p functional-728898 -n functional-728898
functional_test.go:1816: TestFunctional/parallel/MySQL: showing logs for failed pods as of 2025-04-07 12:41:28.459983456 +0000 UTC m=+1684.224895128
functional_test.go:1816: (dbg) Run:  kubectl --context functional-728898 describe po mysql-58ccfd96bb-lwjg6 -n default
functional_test.go:1816: (dbg) kubectl --context functional-728898 describe po mysql-58ccfd96bb-lwjg6 -n default:
Name:             mysql-58ccfd96bb-lwjg6
Namespace:        default
Priority:         0
Service Account:  default
Node:             functional-728898/192.168.39.151
Start Time:       Mon, 07 Apr 2025 12:31:28 +0000
Labels:           app=mysql
pod-template-hash=58ccfd96bb
Annotations:      <none>
Status:           Pending
IP:               10.244.0.13
IPs:
IP:           10.244.0.13
Controlled By:  ReplicaSet/mysql-58ccfd96bb
Containers:
mysql:
Container ID:   
Image:          docker.io/mysql:5.7
Image ID:       
Port:           3306/TCP
Host Port:      0/TCP
State:          Waiting
Reason:       ImagePullBackOff
Ready:          False
Restart Count:  0
Limits:
cpu:     700m
memory:  700Mi
Requests:
cpu:     600m
memory:  512Mi
Environment:
MYSQL_ROOT_PASSWORD:  password
Mounts:
/var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-5gszn (ro)
Conditions:
Type                        Status
PodReadyToStartContainers   True 
Initialized                 True 
Ready                       False 
ContainersReady             False 
PodScheduled                True 
Volumes:
kube-api-access-5gszn:
Type:                    Projected (a volume that contains injected data from multiple sources)
TokenExpirationSeconds:  3607
ConfigMapName:           kube-root-ca.crt
ConfigMapOptional:       <nil>
DownwardAPI:             true
QoS Class:                   Burstable
Node-Selectors:              <none>
Tolerations:                 node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
Events:
Type     Reason     Age                    From               Message
----     ------     ----                   ----               -------
Normal   Scheduled  10m                    default-scheduler  Successfully assigned default/mysql-58ccfd96bb-lwjg6 to functional-728898
Warning  Failed     9m13s                  kubelet            Failed to pull image "docker.io/mysql:5.7": initializing source docker://mysql:5.7: reading manifest 5.7 in docker.io/library/mysql: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit
Warning  Failed     6m23s (x2 over 7m57s)  kubelet            Failed to pull image "docker.io/mysql:5.7": reading manifest 5.7 in docker.io/library/mysql: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit
Normal   Pulling    3m23s (x5 over 10m)    kubelet            Pulling image "docker.io/mysql:5.7"
Warning  Failed     2m52s (x5 over 9m13s)  kubelet            Error: ErrImagePull
Warning  Failed     2m52s (x2 over 4m57s)  kubelet            Failed to pull image "docker.io/mysql:5.7": fetching target platform image selected from image index: reading manifest sha256:dab0a802b44617303694fb17d166501de279c3031ddeb28c56ecf7fcab5ef0da in docker.io/library/mysql: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit
Warning  Failed     97s (x16 over 9m12s)   kubelet            Error: ImagePullBackOff
Normal   BackOff    28s (x21 over 9m12s)   kubelet            Back-off pulling image "docker.io/mysql:5.7"
functional_test.go:1816: (dbg) Run:  kubectl --context functional-728898 logs mysql-58ccfd96bb-lwjg6 -n default
functional_test.go:1816: (dbg) Non-zero exit: kubectl --context functional-728898 logs mysql-58ccfd96bb-lwjg6 -n default: exit status 1 (81.751504ms)

                                                
                                                
** stderr ** 
	Error from server (BadRequest): container "mysql" in pod "mysql-58ccfd96bb-lwjg6" is waiting to start: trying and failing to pull image

                                                
                                                
** /stderr **
functional_test.go:1816: kubectl --context functional-728898 logs mysql-58ccfd96bb-lwjg6 -n default: exit status 1
functional_test.go:1818: failed waiting for mysql pod: app=mysql within 10m0s: context deadline exceeded
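
The failure above is the harness's 10m0s readiness wait expiring while every pull of docker.io/mysql:5.7 is rejected with toomanyrequests. As a rough sketch only (not functional_test.go's actual implementation), a wait of that shape can be expressed with client-go as below; the polling interval and kubeconfig handling are assumptions.

package main

import (
	"context"
	"fmt"
	"time"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/apimachinery/pkg/util/wait"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
	if err != nil {
		panic(err)
	}
	cs := kubernetes.NewForConfigOrDie(cfg)

	// Poll every 2s for up to 10 minutes until a pod labelled app=mysql reports Ready,
	// mirroring the "waiting 10m0s for pods matching app=mysql" step above.
	err = wait.PollUntilContextTimeout(context.Background(), 2*time.Second, 10*time.Minute, true,
		func(ctx context.Context) (bool, error) {
			pods, err := cs.CoreV1().Pods("default").List(ctx, metav1.ListOptions{LabelSelector: "app=mysql"})
			if err != nil {
				return false, nil // treat transient API errors as "keep polling"
			}
			for _, p := range pods.Items {
				for _, c := range p.Status.Conditions {
					if c.Type == corev1.PodReady && c.Status == corev1.ConditionTrue {
						return true, nil
					}
				}
			}
			return false, nil
		})
	if err != nil {
		fmt.Println("pods matching app=mysql never became Ready:", err)
	}
}
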
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p functional-728898 -n functional-728898
helpers_test.go:244: <<< TestFunctional/parallel/MySQL FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestFunctional/parallel/MySQL]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p functional-728898 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-amd64 -p functional-728898 logs -n 25: (1.310131765s)
helpers_test.go:252: TestFunctional/parallel/MySQL logs: 
-- stdout --
	
	==> Audit <==
	|----------------|--------------------------------------------------------------------------|-------------------|---------|---------|---------------------|---------------------|
	|    Command     |                                   Args                                   |      Profile      |  User   | Version |     Start Time      |      End Time       |
	|----------------|--------------------------------------------------------------------------|-------------------|---------|---------|---------------------|---------------------|
	| ssh            | functional-728898 ssh stat                                               | functional-728898 | jenkins | v1.35.0 | 07 Apr 25 12:32 UTC | 07 Apr 25 12:32 UTC |
	|                | /mount-9p/created-by-pod                                                 |                   |         |         |                     |                     |
	| ssh            | functional-728898 ssh sudo                                               | functional-728898 | jenkins | v1.35.0 | 07 Apr 25 12:32 UTC | 07 Apr 25 12:32 UTC |
	|                | umount -f /mount-9p                                                      |                   |         |         |                     |                     |
	| mount          | -p functional-728898                                                     | functional-728898 | jenkins | v1.35.0 | 07 Apr 25 12:32 UTC |                     |
	|                | /tmp/TestFunctionalparallelMountCmdspecific-port2444163506/001:/mount-9p |                   |         |         |                     |                     |
	|                | --alsologtostderr -v=1 --port 46464                                      |                   |         |         |                     |                     |
	| ssh            | functional-728898 ssh findmnt                                            | functional-728898 | jenkins | v1.35.0 | 07 Apr 25 12:32 UTC |                     |
	|                | -T /mount-9p | grep 9p                                                   |                   |         |         |                     |                     |
	| ssh            | functional-728898 ssh findmnt                                            | functional-728898 | jenkins | v1.35.0 | 07 Apr 25 12:32 UTC | 07 Apr 25 12:32 UTC |
	|                | -T /mount-9p | grep 9p                                                   |                   |         |         |                     |                     |
	| ssh            | functional-728898 ssh -- ls                                              | functional-728898 | jenkins | v1.35.0 | 07 Apr 25 12:32 UTC | 07 Apr 25 12:32 UTC |
	|                | -la /mount-9p                                                            |                   |         |         |                     |                     |
	| ssh            | functional-728898 ssh sudo                                               | functional-728898 | jenkins | v1.35.0 | 07 Apr 25 12:32 UTC |                     |
	|                | umount -f /mount-9p                                                      |                   |         |         |                     |                     |
	| ssh            | functional-728898 ssh findmnt                                            | functional-728898 | jenkins | v1.35.0 | 07 Apr 25 12:32 UTC |                     |
	|                | -T /mount1                                                               |                   |         |         |                     |                     |
	| mount          | -p functional-728898                                                     | functional-728898 | jenkins | v1.35.0 | 07 Apr 25 12:32 UTC |                     |
	|                | /tmp/TestFunctionalparallelMountCmdVerifyCleanup2559023673/001:/mount2   |                   |         |         |                     |                     |
	|                | --alsologtostderr -v=1                                                   |                   |         |         |                     |                     |
	| mount          | -p functional-728898                                                     | functional-728898 | jenkins | v1.35.0 | 07 Apr 25 12:32 UTC |                     |
	|                | /tmp/TestFunctionalparallelMountCmdVerifyCleanup2559023673/001:/mount1   |                   |         |         |                     |                     |
	|                | --alsologtostderr -v=1                                                   |                   |         |         |                     |                     |
	| mount          | -p functional-728898                                                     | functional-728898 | jenkins | v1.35.0 | 07 Apr 25 12:32 UTC |                     |
	|                | /tmp/TestFunctionalparallelMountCmdVerifyCleanup2559023673/001:/mount3   |                   |         |         |                     |                     |
	|                | --alsologtostderr -v=1                                                   |                   |         |         |                     |                     |
	| ssh            | functional-728898 ssh findmnt                                            | functional-728898 | jenkins | v1.35.0 | 07 Apr 25 12:32 UTC | 07 Apr 25 12:32 UTC |
	|                | -T /mount1                                                               |                   |         |         |                     |                     |
	| ssh            | functional-728898 ssh findmnt                                            | functional-728898 | jenkins | v1.35.0 | 07 Apr 25 12:32 UTC | 07 Apr 25 12:32 UTC |
	|                | -T /mount2                                                               |                   |         |         |                     |                     |
	| ssh            | functional-728898 ssh findmnt                                            | functional-728898 | jenkins | v1.35.0 | 07 Apr 25 12:32 UTC | 07 Apr 25 12:32 UTC |
	|                | -T /mount3                                                               |                   |         |         |                     |                     |
	| mount          | -p functional-728898                                                     | functional-728898 | jenkins | v1.35.0 | 07 Apr 25 12:32 UTC |                     |
	|                | --kill=true                                                              |                   |         |         |                     |                     |
	| image          | functional-728898                                                        | functional-728898 | jenkins | v1.35.0 | 07 Apr 25 12:32 UTC | 07 Apr 25 12:32 UTC |
	|                | image ls --format short                                                  |                   |         |         |                     |                     |
	|                | --alsologtostderr                                                        |                   |         |         |                     |                     |
	| image          | functional-728898                                                        | functional-728898 | jenkins | v1.35.0 | 07 Apr 25 12:32 UTC | 07 Apr 25 12:32 UTC |
	|                | image ls --format yaml                                                   |                   |         |         |                     |                     |
	|                | --alsologtostderr                                                        |                   |         |         |                     |                     |
	| ssh            | functional-728898 ssh pgrep                                              | functional-728898 | jenkins | v1.35.0 | 07 Apr 25 12:32 UTC |                     |
	|                | buildkitd                                                                |                   |         |         |                     |                     |
	| image          | functional-728898 image build -t                                         | functional-728898 | jenkins | v1.35.0 | 07 Apr 25 12:32 UTC | 07 Apr 25 12:32 UTC |
	|                | localhost/my-image:functional-728898                                     |                   |         |         |                     |                     |
	|                | testdata/build --alsologtostderr                                         |                   |         |         |                     |                     |
	| image          | functional-728898 image ls                                               | functional-728898 | jenkins | v1.35.0 | 07 Apr 25 12:32 UTC | 07 Apr 25 12:32 UTC |
	| image          | functional-728898                                                        | functional-728898 | jenkins | v1.35.0 | 07 Apr 25 12:32 UTC | 07 Apr 25 12:32 UTC |
	|                | image ls --format json                                                   |                   |         |         |                     |                     |
	|                | --alsologtostderr                                                        |                   |         |         |                     |                     |
	| image          | functional-728898                                                        | functional-728898 | jenkins | v1.35.0 | 07 Apr 25 12:32 UTC | 07 Apr 25 12:32 UTC |
	|                | image ls --format table                                                  |                   |         |         |                     |                     |
	|                | --alsologtostderr                                                        |                   |         |         |                     |                     |
	| update-context | functional-728898                                                        | functional-728898 | jenkins | v1.35.0 | 07 Apr 25 12:32 UTC | 07 Apr 25 12:32 UTC |
	|                | update-context                                                           |                   |         |         |                     |                     |
	|                | --alsologtostderr -v=2                                                   |                   |         |         |                     |                     |
	| update-context | functional-728898                                                        | functional-728898 | jenkins | v1.35.0 | 07 Apr 25 12:32 UTC | 07 Apr 25 12:32 UTC |
	|                | update-context                                                           |                   |         |         |                     |                     |
	|                | --alsologtostderr -v=2                                                   |                   |         |         |                     |                     |
	| update-context | functional-728898                                                        | functional-728898 | jenkins | v1.35.0 | 07 Apr 25 12:32 UTC | 07 Apr 25 12:32 UTC |
	|                | update-context                                                           |                   |         |         |                     |                     |
	|                | --alsologtostderr -v=2                                                   |                   |         |         |                     |                     |
	|----------------|--------------------------------------------------------------------------|-------------------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2025/04/07 12:31:27
	Running on machine: ubuntu-20-agent-2
	Binary: Built with gc go1.24.0 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0407 12:31:27.823336 1179726 out.go:345] Setting OutFile to fd 1 ...
	I0407 12:31:27.823511 1179726 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0407 12:31:27.823530 1179726 out.go:358] Setting ErrFile to fd 2...
	I0407 12:31:27.823537 1179726 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0407 12:31:27.823874 1179726 root.go:338] Updating PATH: /home/jenkins/minikube-integration/20602-1162386/.minikube/bin
	I0407 12:31:27.824512 1179726 out.go:352] Setting JSON to false
	I0407 12:31:27.825851 1179726 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-2","uptime":15232,"bootTime":1744013856,"procs":229,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1078-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0407 12:31:27.825986 1179726 start.go:139] virtualization: kvm guest
	I0407 12:31:27.828807 1179726 out.go:177] * [functional-728898] minikube v1.35.0 on Ubuntu 20.04 (kvm/amd64)
	I0407 12:31:27.831661 1179726 notify.go:220] Checking for updates...
	I0407 12:31:27.831689 1179726 out.go:177]   - MINIKUBE_LOCATION=20602
	I0407 12:31:27.835776 1179726 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0407 12:31:27.839208 1179726 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/20602-1162386/kubeconfig
	I0407 12:31:27.841222 1179726 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/20602-1162386/.minikube
	I0407 12:31:27.843251 1179726 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0407 12:31:27.845401 1179726 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0407 12:31:27.847889 1179726 config.go:182] Loaded profile config "functional-728898": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.32.2
	I0407 12:31:27.848422 1179726 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0407 12:31:27.848495 1179726 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0407 12:31:27.872082 1179726 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43461
	I0407 12:31:27.872734 1179726 main.go:141] libmachine: () Calling .GetVersion
	I0407 12:31:27.873397 1179726 main.go:141] libmachine: Using API Version  1
	I0407 12:31:27.873425 1179726 main.go:141] libmachine: () Calling .SetConfigRaw
	I0407 12:31:27.873935 1179726 main.go:141] libmachine: () Calling .GetMachineName
	I0407 12:31:27.874322 1179726 main.go:141] libmachine: (functional-728898) Calling .DriverName
	I0407 12:31:27.874754 1179726 driver.go:394] Setting default libvirt URI to qemu:///system
	I0407 12:31:27.875323 1179726 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0407 12:31:27.875388 1179726 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0407 12:31:27.897668 1179726 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35575
	I0407 12:31:27.898307 1179726 main.go:141] libmachine: () Calling .GetVersion
	I0407 12:31:27.899292 1179726 main.go:141] libmachine: Using API Version  1
	I0407 12:31:27.899465 1179726 main.go:141] libmachine: () Calling .SetConfigRaw
	I0407 12:31:27.900064 1179726 main.go:141] libmachine: () Calling .GetMachineName
	I0407 12:31:27.900462 1179726 main.go:141] libmachine: (functional-728898) Calling .DriverName
	I0407 12:31:27.944079 1179726 out.go:177] * Using the kvm2 driver based on existing profile
	I0407 12:31:27.946374 1179726 start.go:297] selected driver: kvm2
	I0407 12:31:27.946409 1179726 start.go:901] validating driver "kvm2" against &{Name:functional-728898 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.35.0-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.46-1743675393-20591@sha256:8de8167e280414c9d16e4c3da59bc85bc7c9cc24228af995863fc7bcabfcf727 Memory:4000 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.32.2 ClusterNa
me:functional-728898 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.151 Port:8441 KubernetesVersion:v1.32.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/je
nkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0407 12:31:27.946543 1179726 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0407 12:31:27.949744 1179726 out.go:201] 
	W0407 12:31:27.952054 1179726 out.go:270] X Exiting due to RSRC_INSUFFICIENT_REQ_MEMORY: Requested memory allocation 250MiB is less than the usable minimum of 1800MB
	I0407 12:31:27.953828 1179726 out.go:201] 
	
	
	==> CRI-O <==
	Apr 07 12:41:29 functional-728898 crio[4913]: time="2025-04-07 12:41:29.330319340Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1744029689330290273,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:294016,},InodesUsed:&UInt64Value{Value:127,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=1f47eff4-349c-47d8-95e5-a33cb764d2b9 name=/runtime.v1.ImageService/ImageFsInfo
	Apr 07 12:41:29 functional-728898 crio[4913]: time="2025-04-07 12:41:29.331091129Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=db7523e1-64d4-4301-9d18-8df714838c69 name=/runtime.v1.RuntimeService/ListContainers
	Apr 07 12:41:29 functional-728898 crio[4913]: time="2025-04-07 12:41:29.331162273Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=db7523e1-64d4-4301-9d18-8df714838c69 name=/runtime.v1.RuntimeService/ListContainers
	Apr 07 12:41:29 functional-728898 crio[4913]: time="2025-04-07 12:41:29.331466988Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:61cfcaeb35143eb0de95458d3dc252bbfb821bc28f3272f149231a0ee0197135,PodSandboxId:8308802280b4a5804fc9617d78904a82c0e876eb9592537b7dcc7b749f9f7b3f,Metadata:&ContainerMetadata{Name:dashboard-metrics-scraper,Attempt:0,},Image:&ImageSpec{Image:docker.io/kubernetesui/metrics-scraper@sha256:43227e8286fd379ee0415a5e2156a9439c4056807e3caa38e1dd413b0644807a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:115053965e86b2df4d78af78d7951b8644839d20a03820c6df59a261103315f7,State:CONTAINER_RUNNING,CreatedAt:1744029179956536158,Labels:map[string]string{io.kubernetes.container.name: dashboard-metrics-scraper,io.kubernetes.pod.name: dashboard-metrics-scraper-5d59dccf9b-hchz9,io.kubernetes.pod.namespace: kubernetes-dashboard,io.kubernetes.pod.uid: 4a9d61d0-b68a-4d16-863d-ca4fe7e10fa6,},Annotations:map[string]string{io.kub
ernetes.container.hash: 925d0c44,io.kubernetes.container.ports: [{\"containerPort\":8000,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:79e6ae6b95171565d73c07c01490487ea7fee11ba11f7e6a30a204b86335a407,PodSandboxId:0433f6fe349c330301c1bcfbecdbccfac0a3f532893bcdadc5bfd9526ec100d2,Metadata:&ContainerMetadata{Name:kubernetes-dashboard,Attempt:0,},Image:&ImageSpec{Image:docker.io/kubernetesui/dashboard@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:07655ddf2eebe5d250f7a72c25f638b27126805d61779741b4e62e69ba080558,State:CONTAINER_RUNNING,CreatedAt:1744029177103675333,Labels:map[string]string{io.kubernetes.container.name: kubernetes-dashboard,io.kubernetes.pod.name: kubernetes-dashboard-7779f9b69b-wcf49,io.kubernete
s.pod.namespace: kubernetes-dashboard,io.kubernetes.pod.uid: d7735f03-dec7-4187-bb41-90c619473df0,},Annotations:map[string]string{io.kubernetes.container.hash: 823ca662,io.kubernetes.container.ports: [{\"containerPort\":9090,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3ac976c79c64830e17267759075a44ad7a7741506cb370dd4507b325c9883b76,PodSandboxId:f4c1f4934a89650182c3ee05d95be1c6928f58fa9bca64f02bc33930b7548f8c,Metadata:&ContainerMetadata{Name:mount-munger,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c,State:CONTAINER_EXITED,CreatedAt:1744029139668414225,Labels:map[string]string{io.
kubernetes.container.name: mount-munger,io.kubernetes.pod.name: busybox-mount,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: ce03bda2-e049-4c16-bc7f-d21929aa75ce,},Annotations:map[string]string{io.kubernetes.container.hash: dbb284d0,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:45c2dffaca28cb25ab301b0b159e9c767eb22331ef283bf92cdae10b61c8817c,PodSandboxId:7e2f3b699d4fb9be005486c9871944617cd10bc523ba3434f68dfa4230035c6c,Metadata:&ContainerMetadata{Name:echoserver,Attempt:0,},Image:&ImageSpec{Image:82e4c8a736a4fcf22b5ef9f6a4ff6207064c7187d7694bf97bd561605a538410,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:82e4c8a736a4fcf22b5ef9f6a4ff6207064c7187d7694bf97bd561605a538410,State:CONTAINER_RUNNING,CreatedAt:1744029086551862239,Labels:map[string]string{io.kubernetes.container.name: echoserver,
io.kubernetes.pod.name: hello-node-connect-58f9cf68d8-xlzk8,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 238652ef-7f48-42be-ab1b-58c79e679abd,},Annotations:map[string]string{io.kubernetes.container.hash: aa985672,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8ce4150e5ca64bb249585aa874ac2c0b3b5124c0074c0a1af7fde49ba07e1b1f,PodSandboxId:216bf89ff12fac735b99994741760aef9fc0b788d353079aae87be9724ce26b3,Metadata:&ContainerMetadata{Name:nginx,Attempt:0,},Image:&ImageSpec{Image:docker.io/library/nginx@sha256:4ff102c5d78d254a6f0da062b3cf39eaf07f01eec0927fd21e219d0af8bc0591,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1ff4bb4faebcfb1f7e01144fa9904a570ab9bab88694457855feb6c6bba3fa07,State:CONTAINER_RUNNING,CreatedAt:1744029078887662024,Labels:map[string]string{io.kubernetes.container.name: ngi
nx,io.kubernetes.pod.name: nginx-svc,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 7a5f880a-151c-4e01-a038-26cdad8a4086,},Annotations:map[string]string{io.kubernetes.container.hash: cdfbc70a,io.kubernetes.container.ports: [{\"containerPort\":80,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7452ca27af16f51d13a5bf8e5a81261c8c290c5116c5e3ffd0be0e33e6f32287,PodSandboxId:1e6bf2f675ca0671acae94bcb950f153d6d57fec44aa605a312bd2739e52e6ce,Metadata:&ContainerMetadata{Name:echoserver,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/echoserver@sha256:cb3386f863f6a4b05f33c191361723f9d5927ac287463b1bea633bf859475969,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:82e4c8a736a4fcf22b5ef9f6a4ff6207064c7187d7694bf97bd561605a538410,State:CONTAINER_RUNNING,CreatedAt:1744029074691418
251,Labels:map[string]string{io.kubernetes.container.name: echoserver,io.kubernetes.pod.name: hello-node-fcfd88b6f-hq2l4,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 9cd43b15-5de0-4b18-93e0-1a20605b3d3b,},Annotations:map[string]string{io.kubernetes.container.hash: aa985672,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d56da9bec71eb1199a87233f6976d8c3f87e73cdc0e336936406ffa0bc414016,PodSandboxId:250183d67c0abdfbbca1e7baf050e092bd026133834d085970941362166396da,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1744029049854504044,Labels:map[string]string{i
o.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-668d6bf9bc-hlndz,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a7057602-1190-4a5f-90ea-148e2d5babf4,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6c6013453428c3c541fcfd3498084ea17cff6776f774569e24a2a5d87d36ae99,PodSandboxId:f04c38eadb19133055219ae51e4e174500343871db2f4dfc1bb20ada640e2053,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string
{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1744029049401719544,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-668d6bf9bc-52vhr,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 752fe55a-003a-40fa-a1a6-dd762db82e0c,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9930e272f17b71e45f60674688fe7b134094010f20529b2bbc81518611620890,PodSandboxId:cde9f0abcc1c797e36a430a55d1e94b0ef37b148
9754e97a4bf96d406dc5d4a5,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1744029049217840356,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 16ae952f-9e64-4b01-ad75-b556b112cb03,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ad18f7874f75273858975f81e17ce0d514bc3c3c0663b789c0cf9b97d6c5db9a,PodSandboxId:054db3ce1b0abc5de544916f4dac644f5cff1656e8a24d6a9d2f
adad44a09d61,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:f1332858868e1c6a905123b21e2e322ab45a5b99a3532e68ff49a87c2266ebc5,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:f1332858868e1c6a905123b21e2e322ab45a5b99a3532e68ff49a87c2266ebc5,State:CONTAINER_RUNNING,CreatedAt:1744029047886556263,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-48r4x,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 04060509-a7f2-4be6-b807-c709c6c1e3eb,},Annotations:map[string]string{io.kubernetes.container.hash: b4fecc5,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1d400b079c78d32546aa75b8827e7628204648f23b4d6286f7300db0f84913f6,PodSandboxId:b034bf0e0dd3013ae0b2011c772deded3e4ce2a27790d0c4c3a6e6d1cbd44c99,Metadata:&ContainerMe
tadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:85b7a174738baecbc53029b7913cd430a2060e0cbdb5f56c7957d32ff7f241ef,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:85b7a174738baecbc53029b7913cd430a2060e0cbdb5f56c7957d32ff7f241ef,State:CONTAINER_RUNNING,CreatedAt:1744029036307272948,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-functional-728898,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5068eeb41b70734dc3c2019b8280bc0f,},Annotations:map[string]string{io.kubernetes.container.hash: 7745040f,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c3121037762e1f6146a3c087b19535c589d603013d08a88f808c08e4ce645b11,PodSandboxId:257cef0f19a3ed52979effb0faa63323b90ae964f588e4b321504fcfc77acb09,Metadata:&ContainerMetadata{Name:k
ube-scheduler,Attempt:3,},Image:&ImageSpec{Image:d8e673e7c9983f1f53569a9d2ba786c8abb42e3f744f77dc97a595f3caf9435d,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:d8e673e7c9983f1f53569a9d2ba786c8abb42e3f744f77dc97a595f3caf9435d,State:CONTAINER_RUNNING,CreatedAt:1744029036288817771,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-functional-728898,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 65277f868c895be86e767f8d62913710,},Annotations:map[string]string{io.kubernetes.container.hash: 4c5aaea3,io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a5dbf84a635343ac9efc499953db9a64f8df84e22403f3a89220b3c6e7de8128,PodSandboxId:ec3d7895c8a2a83d7dbe96b7dfaa1ae6a8f299a9e4094bec221d230cfd077793,Metadata:&ContainerMetadata{Name:etcd,Attempt:3
,},Image:&ImageSpec{Image:a9e7e6b294baf1695fccb862d956c5d3ad8510e1e4ca1535f35dc09f247abbfc,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a9e7e6b294baf1695fccb862d956c5d3ad8510e1e4ca1535f35dc09f247abbfc,State:CONTAINER_RUNNING,CreatedAt:1744029036252669290,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-functional-728898,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9baea30e9eb118c89ddd369d5e0d1bd6,},Annotations:map[string]string{io.kubernetes.container.hash: e68be80f,io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7486c26ef9200d679640eddc4b175bc9720bddd11b0a617684cb0f38e2f2290a,PodSandboxId:d836f50a336ae77a193527ba127a02d01931d05e46193846e9b9dfdc35d5f99e,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:3,},Image:&ImageSpec{Imag
e:b6a454c5a800d201daacead6ff195ec6049fe6dc086621b0670bca912efaf389,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:b6a454c5a800d201daacead6ff195ec6049fe6dc086621b0670bca912efaf389,State:CONTAINER_RUNNING,CreatedAt:1744029036246488459,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-functional-728898,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d592d768ff54d22d51886bcc5507a7a8,},Annotations:map[string]string{io.kubernetes.container.hash: 51692d3d,io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4448b2213cb7fb1f4a035fa143c4303e3f68cedd25140b0278531edf8fa55a1f,PodSandboxId:8c1c591e2b17c9eba2bf79646fd6152edc2905e781fd186bc7dc497ebe082208,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec
{Image:85b7a174738baecbc53029b7913cd430a2060e0cbdb5f56c7957d32ff7f241ef,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:85b7a174738baecbc53029b7913cd430a2060e0cbdb5f56c7957d32ff7f241ef,State:CONTAINER_EXITED,CreatedAt:1744028774875808632,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-functional-728898,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5068eeb41b70734dc3c2019b8280bc0f,},Annotations:map[string]string{io.kubernetes.container.hash: 7745040f,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=db7523e1-64d4-4301-9d18-8df714838c69 name=/runtime.v1.RuntimeService/ListContainers
	Apr 07 12:41:29 functional-728898 crio[4913]: time="2025-04-07 12:41:29.370040477Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=7a2a1726-880a-455f-8c0e-b55db9e973c9 name=/runtime.v1.RuntimeService/Version
	Apr 07 12:41:29 functional-728898 crio[4913]: time="2025-04-07 12:41:29.370132350Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=7a2a1726-880a-455f-8c0e-b55db9e973c9 name=/runtime.v1.RuntimeService/Version
	Apr 07 12:41:29 functional-728898 crio[4913]: time="2025-04-07 12:41:29.371546851Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=c4d90e3a-06ac-4f68-abab-94f6eba31f4b name=/runtime.v1.ImageService/ImageFsInfo
	Apr 07 12:41:29 functional-728898 crio[4913]: time="2025-04-07 12:41:29.372309308Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1744029689372283635,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:294016,},InodesUsed:&UInt64Value{Value:127,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=c4d90e3a-06ac-4f68-abab-94f6eba31f4b name=/runtime.v1.ImageService/ImageFsInfo
	Apr 07 12:41:29 functional-728898 crio[4913]: time="2025-04-07 12:41:29.373097419Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=ed2b8e21-9c14-4b9a-bf89-d028a8563216 name=/runtime.v1.RuntimeService/ListContainers
	Apr 07 12:41:29 functional-728898 crio[4913]: time="2025-04-07 12:41:29.373298549Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=ed2b8e21-9c14-4b9a-bf89-d028a8563216 name=/runtime.v1.RuntimeService/ListContainers
	Apr 07 12:41:29 functional-728898 crio[4913]: time="2025-04-07 12:41:29.373610614Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:61cfcaeb35143eb0de95458d3dc252bbfb821bc28f3272f149231a0ee0197135,PodSandboxId:8308802280b4a5804fc9617d78904a82c0e876eb9592537b7dcc7b749f9f7b3f,Metadata:&ContainerMetadata{Name:dashboard-metrics-scraper,Attempt:0,},Image:&ImageSpec{Image:docker.io/kubernetesui/metrics-scraper@sha256:43227e8286fd379ee0415a5e2156a9439c4056807e3caa38e1dd413b0644807a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:115053965e86b2df4d78af78d7951b8644839d20a03820c6df59a261103315f7,State:CONTAINER_RUNNING,CreatedAt:1744029179956536158,Labels:map[string]string{io.kubernetes.container.name: dashboard-metrics-scraper,io.kubernetes.pod.name: dashboard-metrics-scraper-5d59dccf9b-hchz9,io.kubernetes.pod.namespace: kubernetes-dashboard,io.kubernetes.pod.uid: 4a9d61d0-b68a-4d16-863d-ca4fe7e10fa6,},Annotations:map[string]string{io.kub
ernetes.container.hash: 925d0c44,io.kubernetes.container.ports: [{\"containerPort\":8000,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:79e6ae6b95171565d73c07c01490487ea7fee11ba11f7e6a30a204b86335a407,PodSandboxId:0433f6fe349c330301c1bcfbecdbccfac0a3f532893bcdadc5bfd9526ec100d2,Metadata:&ContainerMetadata{Name:kubernetes-dashboard,Attempt:0,},Image:&ImageSpec{Image:docker.io/kubernetesui/dashboard@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:07655ddf2eebe5d250f7a72c25f638b27126805d61779741b4e62e69ba080558,State:CONTAINER_RUNNING,CreatedAt:1744029177103675333,Labels:map[string]string{io.kubernetes.container.name: kubernetes-dashboard,io.kubernetes.pod.name: kubernetes-dashboard-7779f9b69b-wcf49,io.kubernete
s.pod.namespace: kubernetes-dashboard,io.kubernetes.pod.uid: d7735f03-dec7-4187-bb41-90c619473df0,},Annotations:map[string]string{io.kubernetes.container.hash: 823ca662,io.kubernetes.container.ports: [{\"containerPort\":9090,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3ac976c79c64830e17267759075a44ad7a7741506cb370dd4507b325c9883b76,PodSandboxId:f4c1f4934a89650182c3ee05d95be1c6928f58fa9bca64f02bc33930b7548f8c,Metadata:&ContainerMetadata{Name:mount-munger,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c,State:CONTAINER_EXITED,CreatedAt:1744029139668414225,Labels:map[string]string{io.
kubernetes.container.name: mount-munger,io.kubernetes.pod.name: busybox-mount,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: ce03bda2-e049-4c16-bc7f-d21929aa75ce,},Annotations:map[string]string{io.kubernetes.container.hash: dbb284d0,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:45c2dffaca28cb25ab301b0b159e9c767eb22331ef283bf92cdae10b61c8817c,PodSandboxId:7e2f3b699d4fb9be005486c9871944617cd10bc523ba3434f68dfa4230035c6c,Metadata:&ContainerMetadata{Name:echoserver,Attempt:0,},Image:&ImageSpec{Image:82e4c8a736a4fcf22b5ef9f6a4ff6207064c7187d7694bf97bd561605a538410,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:82e4c8a736a4fcf22b5ef9f6a4ff6207064c7187d7694bf97bd561605a538410,State:CONTAINER_RUNNING,CreatedAt:1744029086551862239,Labels:map[string]string{io.kubernetes.container.name: echoserver,
io.kubernetes.pod.name: hello-node-connect-58f9cf68d8-xlzk8,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 238652ef-7f48-42be-ab1b-58c79e679abd,},Annotations:map[string]string{io.kubernetes.container.hash: aa985672,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8ce4150e5ca64bb249585aa874ac2c0b3b5124c0074c0a1af7fde49ba07e1b1f,PodSandboxId:216bf89ff12fac735b99994741760aef9fc0b788d353079aae87be9724ce26b3,Metadata:&ContainerMetadata{Name:nginx,Attempt:0,},Image:&ImageSpec{Image:docker.io/library/nginx@sha256:4ff102c5d78d254a6f0da062b3cf39eaf07f01eec0927fd21e219d0af8bc0591,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1ff4bb4faebcfb1f7e01144fa9904a570ab9bab88694457855feb6c6bba3fa07,State:CONTAINER_RUNNING,CreatedAt:1744029078887662024,Labels:map[string]string{io.kubernetes.container.name: ngi
nx,io.kubernetes.pod.name: nginx-svc,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 7a5f880a-151c-4e01-a038-26cdad8a4086,},Annotations:map[string]string{io.kubernetes.container.hash: cdfbc70a,io.kubernetes.container.ports: [{\"containerPort\":80,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7452ca27af16f51d13a5bf8e5a81261c8c290c5116c5e3ffd0be0e33e6f32287,PodSandboxId:1e6bf2f675ca0671acae94bcb950f153d6d57fec44aa605a312bd2739e52e6ce,Metadata:&ContainerMetadata{Name:echoserver,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/echoserver@sha256:cb3386f863f6a4b05f33c191361723f9d5927ac287463b1bea633bf859475969,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:82e4c8a736a4fcf22b5ef9f6a4ff6207064c7187d7694bf97bd561605a538410,State:CONTAINER_RUNNING,CreatedAt:1744029074691418
251,Labels:map[string]string{io.kubernetes.container.name: echoserver,io.kubernetes.pod.name: hello-node-fcfd88b6f-hq2l4,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 9cd43b15-5de0-4b18-93e0-1a20605b3d3b,},Annotations:map[string]string{io.kubernetes.container.hash: aa985672,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d56da9bec71eb1199a87233f6976d8c3f87e73cdc0e336936406ffa0bc414016,PodSandboxId:250183d67c0abdfbbca1e7baf050e092bd026133834d085970941362166396da,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1744029049854504044,Labels:map[string]string{i
o.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-668d6bf9bc-hlndz,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a7057602-1190-4a5f-90ea-148e2d5babf4,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6c6013453428c3c541fcfd3498084ea17cff6776f774569e24a2a5d87d36ae99,PodSandboxId:f04c38eadb19133055219ae51e4e174500343871db2f4dfc1bb20ada640e2053,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string
{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1744029049401719544,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-668d6bf9bc-52vhr,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 752fe55a-003a-40fa-a1a6-dd762db82e0c,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9930e272f17b71e45f60674688fe7b134094010f20529b2bbc81518611620890,PodSandboxId:cde9f0abcc1c797e36a430a55d1e94b0ef37b148
9754e97a4bf96d406dc5d4a5,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1744029049217840356,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 16ae952f-9e64-4b01-ad75-b556b112cb03,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ad18f7874f75273858975f81e17ce0d514bc3c3c0663b789c0cf9b97d6c5db9a,PodSandboxId:054db3ce1b0abc5de544916f4dac644f5cff1656e8a24d6a9d2f
adad44a09d61,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:f1332858868e1c6a905123b21e2e322ab45a5b99a3532e68ff49a87c2266ebc5,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:f1332858868e1c6a905123b21e2e322ab45a5b99a3532e68ff49a87c2266ebc5,State:CONTAINER_RUNNING,CreatedAt:1744029047886556263,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-48r4x,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 04060509-a7f2-4be6-b807-c709c6c1e3eb,},Annotations:map[string]string{io.kubernetes.container.hash: b4fecc5,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1d400b079c78d32546aa75b8827e7628204648f23b4d6286f7300db0f84913f6,PodSandboxId:b034bf0e0dd3013ae0b2011c772deded3e4ce2a27790d0c4c3a6e6d1cbd44c99,Metadata:&ContainerMe
tadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:85b7a174738baecbc53029b7913cd430a2060e0cbdb5f56c7957d32ff7f241ef,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:85b7a174738baecbc53029b7913cd430a2060e0cbdb5f56c7957d32ff7f241ef,State:CONTAINER_RUNNING,CreatedAt:1744029036307272948,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-functional-728898,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5068eeb41b70734dc3c2019b8280bc0f,},Annotations:map[string]string{io.kubernetes.container.hash: 7745040f,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c3121037762e1f6146a3c087b19535c589d603013d08a88f808c08e4ce645b11,PodSandboxId:257cef0f19a3ed52979effb0faa63323b90ae964f588e4b321504fcfc77acb09,Metadata:&ContainerMetadata{Name:k
ube-scheduler,Attempt:3,},Image:&ImageSpec{Image:d8e673e7c9983f1f53569a9d2ba786c8abb42e3f744f77dc97a595f3caf9435d,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:d8e673e7c9983f1f53569a9d2ba786c8abb42e3f744f77dc97a595f3caf9435d,State:CONTAINER_RUNNING,CreatedAt:1744029036288817771,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-functional-728898,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 65277f868c895be86e767f8d62913710,},Annotations:map[string]string{io.kubernetes.container.hash: 4c5aaea3,io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a5dbf84a635343ac9efc499953db9a64f8df84e22403f3a89220b3c6e7de8128,PodSandboxId:ec3d7895c8a2a83d7dbe96b7dfaa1ae6a8f299a9e4094bec221d230cfd077793,Metadata:&ContainerMetadata{Name:etcd,Attempt:3
,},Image:&ImageSpec{Image:a9e7e6b294baf1695fccb862d956c5d3ad8510e1e4ca1535f35dc09f247abbfc,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a9e7e6b294baf1695fccb862d956c5d3ad8510e1e4ca1535f35dc09f247abbfc,State:CONTAINER_RUNNING,CreatedAt:1744029036252669290,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-functional-728898,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9baea30e9eb118c89ddd369d5e0d1bd6,},Annotations:map[string]string{io.kubernetes.container.hash: e68be80f,io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7486c26ef9200d679640eddc4b175bc9720bddd11b0a617684cb0f38e2f2290a,PodSandboxId:d836f50a336ae77a193527ba127a02d01931d05e46193846e9b9dfdc35d5f99e,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:3,},Image:&ImageSpec{Imag
e:b6a454c5a800d201daacead6ff195ec6049fe6dc086621b0670bca912efaf389,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:b6a454c5a800d201daacead6ff195ec6049fe6dc086621b0670bca912efaf389,State:CONTAINER_RUNNING,CreatedAt:1744029036246488459,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-functional-728898,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d592d768ff54d22d51886bcc5507a7a8,},Annotations:map[string]string{io.kubernetes.container.hash: 51692d3d,io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4448b2213cb7fb1f4a035fa143c4303e3f68cedd25140b0278531edf8fa55a1f,PodSandboxId:8c1c591e2b17c9eba2bf79646fd6152edc2905e781fd186bc7dc497ebe082208,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec
{Image:85b7a174738baecbc53029b7913cd430a2060e0cbdb5f56c7957d32ff7f241ef,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:85b7a174738baecbc53029b7913cd430a2060e0cbdb5f56c7957d32ff7f241ef,State:CONTAINER_EXITED,CreatedAt:1744028774875808632,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-functional-728898,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5068eeb41b70734dc3c2019b8280bc0f,},Annotations:map[string]string{io.kubernetes.container.hash: 7745040f,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=ed2b8e21-9c14-4b9a-bf89-d028a8563216 name=/runtime.v1.RuntimeService/ListContainers
	Apr 07 12:41:29 functional-728898 crio[4913]: time="2025-04-07 12:41:29.406831591Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=e99b1f17-21ea-40ed-b5b6-3b0647b4c602 name=/runtime.v1.RuntimeService/Version
	Apr 07 12:41:29 functional-728898 crio[4913]: time="2025-04-07 12:41:29.407027636Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=e99b1f17-21ea-40ed-b5b6-3b0647b4c602 name=/runtime.v1.RuntimeService/Version
	Apr 07 12:41:29 functional-728898 crio[4913]: time="2025-04-07 12:41:29.408095432Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=3f9e36b2-6280-42b3-b546-1dc3c293110b name=/runtime.v1.ImageService/ImageFsInfo
	Apr 07 12:41:29 functional-728898 crio[4913]: time="2025-04-07 12:41:29.408817280Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1744029689408793053,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:294016,},InodesUsed:&UInt64Value{Value:127,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=3f9e36b2-6280-42b3-b546-1dc3c293110b name=/runtime.v1.ImageService/ImageFsInfo
	Apr 07 12:41:29 functional-728898 crio[4913]: time="2025-04-07 12:41:29.409621898Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=604222b7-a2f1-4772-8efb-41a193cbf441 name=/runtime.v1.RuntimeService/ListContainers
	Apr 07 12:41:29 functional-728898 crio[4913]: time="2025-04-07 12:41:29.409685095Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=604222b7-a2f1-4772-8efb-41a193cbf441 name=/runtime.v1.RuntimeService/ListContainers
	Apr 07 12:41:29 functional-728898 crio[4913]: time="2025-04-07 12:41:29.410053520Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:61cfcaeb35143eb0de95458d3dc252bbfb821bc28f3272f149231a0ee0197135,PodSandboxId:8308802280b4a5804fc9617d78904a82c0e876eb9592537b7dcc7b749f9f7b3f,Metadata:&ContainerMetadata{Name:dashboard-metrics-scraper,Attempt:0,},Image:&ImageSpec{Image:docker.io/kubernetesui/metrics-scraper@sha256:43227e8286fd379ee0415a5e2156a9439c4056807e3caa38e1dd413b0644807a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:115053965e86b2df4d78af78d7951b8644839d20a03820c6df59a261103315f7,State:CONTAINER_RUNNING,CreatedAt:1744029179956536158,Labels:map[string]string{io.kubernetes.container.name: dashboard-metrics-scraper,io.kubernetes.pod.name: dashboard-metrics-scraper-5d59dccf9b-hchz9,io.kubernetes.pod.namespace: kubernetes-dashboard,io.kubernetes.pod.uid: 4a9d61d0-b68a-4d16-863d-ca4fe7e10fa6,},Annotations:map[string]string{io.kub
ernetes.container.hash: 925d0c44,io.kubernetes.container.ports: [{\"containerPort\":8000,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:79e6ae6b95171565d73c07c01490487ea7fee11ba11f7e6a30a204b86335a407,PodSandboxId:0433f6fe349c330301c1bcfbecdbccfac0a3f532893bcdadc5bfd9526ec100d2,Metadata:&ContainerMetadata{Name:kubernetes-dashboard,Attempt:0,},Image:&ImageSpec{Image:docker.io/kubernetesui/dashboard@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:07655ddf2eebe5d250f7a72c25f638b27126805d61779741b4e62e69ba080558,State:CONTAINER_RUNNING,CreatedAt:1744029177103675333,Labels:map[string]string{io.kubernetes.container.name: kubernetes-dashboard,io.kubernetes.pod.name: kubernetes-dashboard-7779f9b69b-wcf49,io.kubernete
s.pod.namespace: kubernetes-dashboard,io.kubernetes.pod.uid: d7735f03-dec7-4187-bb41-90c619473df0,},Annotations:map[string]string{io.kubernetes.container.hash: 823ca662,io.kubernetes.container.ports: [{\"containerPort\":9090,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3ac976c79c64830e17267759075a44ad7a7741506cb370dd4507b325c9883b76,PodSandboxId:f4c1f4934a89650182c3ee05d95be1c6928f58fa9bca64f02bc33930b7548f8c,Metadata:&ContainerMetadata{Name:mount-munger,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c,State:CONTAINER_EXITED,CreatedAt:1744029139668414225,Labels:map[string]string{io.
kubernetes.container.name: mount-munger,io.kubernetes.pod.name: busybox-mount,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: ce03bda2-e049-4c16-bc7f-d21929aa75ce,},Annotations:map[string]string{io.kubernetes.container.hash: dbb284d0,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:45c2dffaca28cb25ab301b0b159e9c767eb22331ef283bf92cdae10b61c8817c,PodSandboxId:7e2f3b699d4fb9be005486c9871944617cd10bc523ba3434f68dfa4230035c6c,Metadata:&ContainerMetadata{Name:echoserver,Attempt:0,},Image:&ImageSpec{Image:82e4c8a736a4fcf22b5ef9f6a4ff6207064c7187d7694bf97bd561605a538410,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:82e4c8a736a4fcf22b5ef9f6a4ff6207064c7187d7694bf97bd561605a538410,State:CONTAINER_RUNNING,CreatedAt:1744029086551862239,Labels:map[string]string{io.kubernetes.container.name: echoserver,
io.kubernetes.pod.name: hello-node-connect-58f9cf68d8-xlzk8,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 238652ef-7f48-42be-ab1b-58c79e679abd,},Annotations:map[string]string{io.kubernetes.container.hash: aa985672,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8ce4150e5ca64bb249585aa874ac2c0b3b5124c0074c0a1af7fde49ba07e1b1f,PodSandboxId:216bf89ff12fac735b99994741760aef9fc0b788d353079aae87be9724ce26b3,Metadata:&ContainerMetadata{Name:nginx,Attempt:0,},Image:&ImageSpec{Image:docker.io/library/nginx@sha256:4ff102c5d78d254a6f0da062b3cf39eaf07f01eec0927fd21e219d0af8bc0591,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1ff4bb4faebcfb1f7e01144fa9904a570ab9bab88694457855feb6c6bba3fa07,State:CONTAINER_RUNNING,CreatedAt:1744029078887662024,Labels:map[string]string{io.kubernetes.container.name: ngi
nx,io.kubernetes.pod.name: nginx-svc,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 7a5f880a-151c-4e01-a038-26cdad8a4086,},Annotations:map[string]string{io.kubernetes.container.hash: cdfbc70a,io.kubernetes.container.ports: [{\"containerPort\":80,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7452ca27af16f51d13a5bf8e5a81261c8c290c5116c5e3ffd0be0e33e6f32287,PodSandboxId:1e6bf2f675ca0671acae94bcb950f153d6d57fec44aa605a312bd2739e52e6ce,Metadata:&ContainerMetadata{Name:echoserver,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/echoserver@sha256:cb3386f863f6a4b05f33c191361723f9d5927ac287463b1bea633bf859475969,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:82e4c8a736a4fcf22b5ef9f6a4ff6207064c7187d7694bf97bd561605a538410,State:CONTAINER_RUNNING,CreatedAt:1744029074691418
251,Labels:map[string]string{io.kubernetes.container.name: echoserver,io.kubernetes.pod.name: hello-node-fcfd88b6f-hq2l4,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 9cd43b15-5de0-4b18-93e0-1a20605b3d3b,},Annotations:map[string]string{io.kubernetes.container.hash: aa985672,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d56da9bec71eb1199a87233f6976d8c3f87e73cdc0e336936406ffa0bc414016,PodSandboxId:250183d67c0abdfbbca1e7baf050e092bd026133834d085970941362166396da,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1744029049854504044,Labels:map[string]string{i
o.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-668d6bf9bc-hlndz,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a7057602-1190-4a5f-90ea-148e2d5babf4,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6c6013453428c3c541fcfd3498084ea17cff6776f774569e24a2a5d87d36ae99,PodSandboxId:f04c38eadb19133055219ae51e4e174500343871db2f4dfc1bb20ada640e2053,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string
{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1744029049401719544,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-668d6bf9bc-52vhr,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 752fe55a-003a-40fa-a1a6-dd762db82e0c,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9930e272f17b71e45f60674688fe7b134094010f20529b2bbc81518611620890,PodSandboxId:cde9f0abcc1c797e36a430a55d1e94b0ef37b148
9754e97a4bf96d406dc5d4a5,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1744029049217840356,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 16ae952f-9e64-4b01-ad75-b556b112cb03,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ad18f7874f75273858975f81e17ce0d514bc3c3c0663b789c0cf9b97d6c5db9a,PodSandboxId:054db3ce1b0abc5de544916f4dac644f5cff1656e8a24d6a9d2f
adad44a09d61,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:f1332858868e1c6a905123b21e2e322ab45a5b99a3532e68ff49a87c2266ebc5,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:f1332858868e1c6a905123b21e2e322ab45a5b99a3532e68ff49a87c2266ebc5,State:CONTAINER_RUNNING,CreatedAt:1744029047886556263,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-48r4x,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 04060509-a7f2-4be6-b807-c709c6c1e3eb,},Annotations:map[string]string{io.kubernetes.container.hash: b4fecc5,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1d400b079c78d32546aa75b8827e7628204648f23b4d6286f7300db0f84913f6,PodSandboxId:b034bf0e0dd3013ae0b2011c772deded3e4ce2a27790d0c4c3a6e6d1cbd44c99,Metadata:&ContainerMe
tadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:85b7a174738baecbc53029b7913cd430a2060e0cbdb5f56c7957d32ff7f241ef,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:85b7a174738baecbc53029b7913cd430a2060e0cbdb5f56c7957d32ff7f241ef,State:CONTAINER_RUNNING,CreatedAt:1744029036307272948,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-functional-728898,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5068eeb41b70734dc3c2019b8280bc0f,},Annotations:map[string]string{io.kubernetes.container.hash: 7745040f,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c3121037762e1f6146a3c087b19535c589d603013d08a88f808c08e4ce645b11,PodSandboxId:257cef0f19a3ed52979effb0faa63323b90ae964f588e4b321504fcfc77acb09,Metadata:&ContainerMetadata{Name:k
ube-scheduler,Attempt:3,},Image:&ImageSpec{Image:d8e673e7c9983f1f53569a9d2ba786c8abb42e3f744f77dc97a595f3caf9435d,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:d8e673e7c9983f1f53569a9d2ba786c8abb42e3f744f77dc97a595f3caf9435d,State:CONTAINER_RUNNING,CreatedAt:1744029036288817771,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-functional-728898,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 65277f868c895be86e767f8d62913710,},Annotations:map[string]string{io.kubernetes.container.hash: 4c5aaea3,io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a5dbf84a635343ac9efc499953db9a64f8df84e22403f3a89220b3c6e7de8128,PodSandboxId:ec3d7895c8a2a83d7dbe96b7dfaa1ae6a8f299a9e4094bec221d230cfd077793,Metadata:&ContainerMetadata{Name:etcd,Attempt:3
,},Image:&ImageSpec{Image:a9e7e6b294baf1695fccb862d956c5d3ad8510e1e4ca1535f35dc09f247abbfc,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a9e7e6b294baf1695fccb862d956c5d3ad8510e1e4ca1535f35dc09f247abbfc,State:CONTAINER_RUNNING,CreatedAt:1744029036252669290,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-functional-728898,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9baea30e9eb118c89ddd369d5e0d1bd6,},Annotations:map[string]string{io.kubernetes.container.hash: e68be80f,io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7486c26ef9200d679640eddc4b175bc9720bddd11b0a617684cb0f38e2f2290a,PodSandboxId:d836f50a336ae77a193527ba127a02d01931d05e46193846e9b9dfdc35d5f99e,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:3,},Image:&ImageSpec{Imag
e:b6a454c5a800d201daacead6ff195ec6049fe6dc086621b0670bca912efaf389,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:b6a454c5a800d201daacead6ff195ec6049fe6dc086621b0670bca912efaf389,State:CONTAINER_RUNNING,CreatedAt:1744029036246488459,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-functional-728898,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d592d768ff54d22d51886bcc5507a7a8,},Annotations:map[string]string{io.kubernetes.container.hash: 51692d3d,io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4448b2213cb7fb1f4a035fa143c4303e3f68cedd25140b0278531edf8fa55a1f,PodSandboxId:8c1c591e2b17c9eba2bf79646fd6152edc2905e781fd186bc7dc497ebe082208,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec
{Image:85b7a174738baecbc53029b7913cd430a2060e0cbdb5f56c7957d32ff7f241ef,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:85b7a174738baecbc53029b7913cd430a2060e0cbdb5f56c7957d32ff7f241ef,State:CONTAINER_EXITED,CreatedAt:1744028774875808632,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-functional-728898,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5068eeb41b70734dc3c2019b8280bc0f,},Annotations:map[string]string{io.kubernetes.container.hash: 7745040f,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=604222b7-a2f1-4772-8efb-41a193cbf441 name=/runtime.v1.RuntimeService/ListContainers
	Apr 07 12:41:29 functional-728898 crio[4913]: time="2025-04-07 12:41:29.451505403Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=88fe3fdb-db74-4489-bb4d-6a7ac6edff58 name=/runtime.v1.RuntimeService/Version
	Apr 07 12:41:29 functional-728898 crio[4913]: time="2025-04-07 12:41:29.451579524Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=88fe3fdb-db74-4489-bb4d-6a7ac6edff58 name=/runtime.v1.RuntimeService/Version
	Apr 07 12:41:29 functional-728898 crio[4913]: time="2025-04-07 12:41:29.453187868Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=2f61b69e-741c-4c4a-a7ed-81b2d7536139 name=/runtime.v1.ImageService/ImageFsInfo
	Apr 07 12:41:29 functional-728898 crio[4913]: time="2025-04-07 12:41:29.453993636Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1744029689453904416,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:294016,},InodesUsed:&UInt64Value{Value:127,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=2f61b69e-741c-4c4a-a7ed-81b2d7536139 name=/runtime.v1.ImageService/ImageFsInfo
	Apr 07 12:41:29 functional-728898 crio[4913]: time="2025-04-07 12:41:29.454739871Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=81290e51-6e56-4526-93b5-1a1b65e73ca0 name=/runtime.v1.RuntimeService/ListContainers
	Apr 07 12:41:29 functional-728898 crio[4913]: time="2025-04-07 12:41:29.454805559Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=81290e51-6e56-4526-93b5-1a1b65e73ca0 name=/runtime.v1.RuntimeService/ListContainers
	Apr 07 12:41:29 functional-728898 crio[4913]: time="2025-04-07 12:41:29.455168584Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:61cfcaeb35143eb0de95458d3dc252bbfb821bc28f3272f149231a0ee0197135,PodSandboxId:8308802280b4a5804fc9617d78904a82c0e876eb9592537b7dcc7b749f9f7b3f,Metadata:&ContainerMetadata{Name:dashboard-metrics-scraper,Attempt:0,},Image:&ImageSpec{Image:docker.io/kubernetesui/metrics-scraper@sha256:43227e8286fd379ee0415a5e2156a9439c4056807e3caa38e1dd413b0644807a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:115053965e86b2df4d78af78d7951b8644839d20a03820c6df59a261103315f7,State:CONTAINER_RUNNING,CreatedAt:1744029179956536158,Labels:map[string]string{io.kubernetes.container.name: dashboard-metrics-scraper,io.kubernetes.pod.name: dashboard-metrics-scraper-5d59dccf9b-hchz9,io.kubernetes.pod.namespace: kubernetes-dashboard,io.kubernetes.pod.uid: 4a9d61d0-b68a-4d16-863d-ca4fe7e10fa6,},Annotations:map[string]string{io.kub
ernetes.container.hash: 925d0c44,io.kubernetes.container.ports: [{\"containerPort\":8000,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:79e6ae6b95171565d73c07c01490487ea7fee11ba11f7e6a30a204b86335a407,PodSandboxId:0433f6fe349c330301c1bcfbecdbccfac0a3f532893bcdadc5bfd9526ec100d2,Metadata:&ContainerMetadata{Name:kubernetes-dashboard,Attempt:0,},Image:&ImageSpec{Image:docker.io/kubernetesui/dashboard@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:07655ddf2eebe5d250f7a72c25f638b27126805d61779741b4e62e69ba080558,State:CONTAINER_RUNNING,CreatedAt:1744029177103675333,Labels:map[string]string{io.kubernetes.container.name: kubernetes-dashboard,io.kubernetes.pod.name: kubernetes-dashboard-7779f9b69b-wcf49,io.kubernete
s.pod.namespace: kubernetes-dashboard,io.kubernetes.pod.uid: d7735f03-dec7-4187-bb41-90c619473df0,},Annotations:map[string]string{io.kubernetes.container.hash: 823ca662,io.kubernetes.container.ports: [{\"containerPort\":9090,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3ac976c79c64830e17267759075a44ad7a7741506cb370dd4507b325c9883b76,PodSandboxId:f4c1f4934a89650182c3ee05d95be1c6928f58fa9bca64f02bc33930b7548f8c,Metadata:&ContainerMetadata{Name:mount-munger,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c,State:CONTAINER_EXITED,CreatedAt:1744029139668414225,Labels:map[string]string{io.
kubernetes.container.name: mount-munger,io.kubernetes.pod.name: busybox-mount,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: ce03bda2-e049-4c16-bc7f-d21929aa75ce,},Annotations:map[string]string{io.kubernetes.container.hash: dbb284d0,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:45c2dffaca28cb25ab301b0b159e9c767eb22331ef283bf92cdae10b61c8817c,PodSandboxId:7e2f3b699d4fb9be005486c9871944617cd10bc523ba3434f68dfa4230035c6c,Metadata:&ContainerMetadata{Name:echoserver,Attempt:0,},Image:&ImageSpec{Image:82e4c8a736a4fcf22b5ef9f6a4ff6207064c7187d7694bf97bd561605a538410,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:82e4c8a736a4fcf22b5ef9f6a4ff6207064c7187d7694bf97bd561605a538410,State:CONTAINER_RUNNING,CreatedAt:1744029086551862239,Labels:map[string]string{io.kubernetes.container.name: echoserver,
io.kubernetes.pod.name: hello-node-connect-58f9cf68d8-xlzk8,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 238652ef-7f48-42be-ab1b-58c79e679abd,},Annotations:map[string]string{io.kubernetes.container.hash: aa985672,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8ce4150e5ca64bb249585aa874ac2c0b3b5124c0074c0a1af7fde49ba07e1b1f,PodSandboxId:216bf89ff12fac735b99994741760aef9fc0b788d353079aae87be9724ce26b3,Metadata:&ContainerMetadata{Name:nginx,Attempt:0,},Image:&ImageSpec{Image:docker.io/library/nginx@sha256:4ff102c5d78d254a6f0da062b3cf39eaf07f01eec0927fd21e219d0af8bc0591,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1ff4bb4faebcfb1f7e01144fa9904a570ab9bab88694457855feb6c6bba3fa07,State:CONTAINER_RUNNING,CreatedAt:1744029078887662024,Labels:map[string]string{io.kubernetes.container.name: ngi
nx,io.kubernetes.pod.name: nginx-svc,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 7a5f880a-151c-4e01-a038-26cdad8a4086,},Annotations:map[string]string{io.kubernetes.container.hash: cdfbc70a,io.kubernetes.container.ports: [{\"containerPort\":80,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7452ca27af16f51d13a5bf8e5a81261c8c290c5116c5e3ffd0be0e33e6f32287,PodSandboxId:1e6bf2f675ca0671acae94bcb950f153d6d57fec44aa605a312bd2739e52e6ce,Metadata:&ContainerMetadata{Name:echoserver,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/echoserver@sha256:cb3386f863f6a4b05f33c191361723f9d5927ac287463b1bea633bf859475969,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:82e4c8a736a4fcf22b5ef9f6a4ff6207064c7187d7694bf97bd561605a538410,State:CONTAINER_RUNNING,CreatedAt:1744029074691418
251,Labels:map[string]string{io.kubernetes.container.name: echoserver,io.kubernetes.pod.name: hello-node-fcfd88b6f-hq2l4,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 9cd43b15-5de0-4b18-93e0-1a20605b3d3b,},Annotations:map[string]string{io.kubernetes.container.hash: aa985672,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d56da9bec71eb1199a87233f6976d8c3f87e73cdc0e336936406ffa0bc414016,PodSandboxId:250183d67c0abdfbbca1e7baf050e092bd026133834d085970941362166396da,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1744029049854504044,Labels:map[string]string{i
o.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-668d6bf9bc-hlndz,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a7057602-1190-4a5f-90ea-148e2d5babf4,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6c6013453428c3c541fcfd3498084ea17cff6776f774569e24a2a5d87d36ae99,PodSandboxId:f04c38eadb19133055219ae51e4e174500343871db2f4dfc1bb20ada640e2053,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string
{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1744029049401719544,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-668d6bf9bc-52vhr,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 752fe55a-003a-40fa-a1a6-dd762db82e0c,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9930e272f17b71e45f60674688fe7b134094010f20529b2bbc81518611620890,PodSandboxId:cde9f0abcc1c797e36a430a55d1e94b0ef37b148
9754e97a4bf96d406dc5d4a5,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1744029049217840356,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 16ae952f-9e64-4b01-ad75-b556b112cb03,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ad18f7874f75273858975f81e17ce0d514bc3c3c0663b789c0cf9b97d6c5db9a,PodSandboxId:054db3ce1b0abc5de544916f4dac644f5cff1656e8a24d6a9d2f
adad44a09d61,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:f1332858868e1c6a905123b21e2e322ab45a5b99a3532e68ff49a87c2266ebc5,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:f1332858868e1c6a905123b21e2e322ab45a5b99a3532e68ff49a87c2266ebc5,State:CONTAINER_RUNNING,CreatedAt:1744029047886556263,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-48r4x,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 04060509-a7f2-4be6-b807-c709c6c1e3eb,},Annotations:map[string]string{io.kubernetes.container.hash: b4fecc5,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1d400b079c78d32546aa75b8827e7628204648f23b4d6286f7300db0f84913f6,PodSandboxId:b034bf0e0dd3013ae0b2011c772deded3e4ce2a27790d0c4c3a6e6d1cbd44c99,Metadata:&ContainerMe
tadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:85b7a174738baecbc53029b7913cd430a2060e0cbdb5f56c7957d32ff7f241ef,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:85b7a174738baecbc53029b7913cd430a2060e0cbdb5f56c7957d32ff7f241ef,State:CONTAINER_RUNNING,CreatedAt:1744029036307272948,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-functional-728898,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5068eeb41b70734dc3c2019b8280bc0f,},Annotations:map[string]string{io.kubernetes.container.hash: 7745040f,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c3121037762e1f6146a3c087b19535c589d603013d08a88f808c08e4ce645b11,PodSandboxId:257cef0f19a3ed52979effb0faa63323b90ae964f588e4b321504fcfc77acb09,Metadata:&ContainerMetadata{Name:k
ube-scheduler,Attempt:3,},Image:&ImageSpec{Image:d8e673e7c9983f1f53569a9d2ba786c8abb42e3f744f77dc97a595f3caf9435d,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:d8e673e7c9983f1f53569a9d2ba786c8abb42e3f744f77dc97a595f3caf9435d,State:CONTAINER_RUNNING,CreatedAt:1744029036288817771,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-functional-728898,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 65277f868c895be86e767f8d62913710,},Annotations:map[string]string{io.kubernetes.container.hash: 4c5aaea3,io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a5dbf84a635343ac9efc499953db9a64f8df84e22403f3a89220b3c6e7de8128,PodSandboxId:ec3d7895c8a2a83d7dbe96b7dfaa1ae6a8f299a9e4094bec221d230cfd077793,Metadata:&ContainerMetadata{Name:etcd,Attempt:3
,},Image:&ImageSpec{Image:a9e7e6b294baf1695fccb862d956c5d3ad8510e1e4ca1535f35dc09f247abbfc,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a9e7e6b294baf1695fccb862d956c5d3ad8510e1e4ca1535f35dc09f247abbfc,State:CONTAINER_RUNNING,CreatedAt:1744029036252669290,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-functional-728898,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9baea30e9eb118c89ddd369d5e0d1bd6,},Annotations:map[string]string{io.kubernetes.container.hash: e68be80f,io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7486c26ef9200d679640eddc4b175bc9720bddd11b0a617684cb0f38e2f2290a,PodSandboxId:d836f50a336ae77a193527ba127a02d01931d05e46193846e9b9dfdc35d5f99e,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:3,},Image:&ImageSpec{Imag
e:b6a454c5a800d201daacead6ff195ec6049fe6dc086621b0670bca912efaf389,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:b6a454c5a800d201daacead6ff195ec6049fe6dc086621b0670bca912efaf389,State:CONTAINER_RUNNING,CreatedAt:1744029036246488459,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-functional-728898,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d592d768ff54d22d51886bcc5507a7a8,},Annotations:map[string]string{io.kubernetes.container.hash: 51692d3d,io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4448b2213cb7fb1f4a035fa143c4303e3f68cedd25140b0278531edf8fa55a1f,PodSandboxId:8c1c591e2b17c9eba2bf79646fd6152edc2905e781fd186bc7dc497ebe082208,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec
{Image:85b7a174738baecbc53029b7913cd430a2060e0cbdb5f56c7957d32ff7f241ef,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:85b7a174738baecbc53029b7913cd430a2060e0cbdb5f56c7957d32ff7f241ef,State:CONTAINER_EXITED,CreatedAt:1744028774875808632,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-functional-728898,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5068eeb41b70734dc3c2019b8280bc0f,},Annotations:map[string]string{io.kubernetes.container.hash: 7745040f,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=81290e51-6e56-4526-93b5-1a1b65e73ca0 name=/runtime.v1.RuntimeService/ListContainers
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                            CREATED             STATE               NAME                        ATTEMPT             POD ID              POD
	61cfcaeb35143       docker.io/kubernetesui/metrics-scraper@sha256:43227e8286fd379ee0415a5e2156a9439c4056807e3caa38e1dd413b0644807a   8 minutes ago       Running             dashboard-metrics-scraper   0                   8308802280b4a       dashboard-metrics-scraper-5d59dccf9b-hchz9
	79e6ae6b95171       docker.io/kubernetesui/dashboard@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93         8 minutes ago       Running             kubernetes-dashboard        0                   0433f6fe349c3       kubernetes-dashboard-7779f9b69b-wcf49
	3ac976c79c648       gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e              9 minutes ago       Exited              mount-munger                0                   f4c1f4934a896       busybox-mount
	45c2dffaca28c       82e4c8a736a4fcf22b5ef9f6a4ff6207064c7187d7694bf97bd561605a538410                                                 10 minutes ago      Running             echoserver                  0                   7e2f3b699d4fb       hello-node-connect-58f9cf68d8-xlzk8
	8ce4150e5ca64       docker.io/library/nginx@sha256:4ff102c5d78d254a6f0da062b3cf39eaf07f01eec0927fd21e219d0af8bc0591                  10 minutes ago      Running             nginx                       0                   216bf89ff12fa       nginx-svc
	7452ca27af16f       registry.k8s.io/echoserver@sha256:cb3386f863f6a4b05f33c191361723f9d5927ac287463b1bea633bf859475969               10 minutes ago      Running             echoserver                  0                   1e6bf2f675ca0       hello-node-fcfd88b6f-hq2l4
	d56da9bec71eb       c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6                                                 10 minutes ago      Running             coredns                     0                   250183d67c0ab       coredns-668d6bf9bc-hlndz
	6c6013453428c       c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6                                                 10 minutes ago      Running             coredns                     0                   f04c38eadb191       coredns-668d6bf9bc-52vhr
	9930e272f17b7       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                                 10 minutes ago      Running             storage-provisioner         0                   cde9f0abcc1c7       storage-provisioner
	ad18f7874f752       f1332858868e1c6a905123b21e2e322ab45a5b99a3532e68ff49a87c2266ebc5                                                 10 minutes ago      Running             kube-proxy                  0                   054db3ce1b0ab       kube-proxy-48r4x
	1d400b079c78d       85b7a174738baecbc53029b7913cd430a2060e0cbdb5f56c7957d32ff7f241ef                                                 10 minutes ago      Running             kube-apiserver              1                   b034bf0e0dd30       kube-apiserver-functional-728898
	c3121037762e1       d8e673e7c9983f1f53569a9d2ba786c8abb42e3f744f77dc97a595f3caf9435d                                                 10 minutes ago      Running             kube-scheduler              3                   257cef0f19a3e       kube-scheduler-functional-728898
	a5dbf84a63534       a9e7e6b294baf1695fccb862d956c5d3ad8510e1e4ca1535f35dc09f247abbfc                                                 10 minutes ago      Running             etcd                        3                   ec3d7895c8a2a       etcd-functional-728898
	7486c26ef9200       b6a454c5a800d201daacead6ff195ec6049fe6dc086621b0670bca912efaf389                                                 10 minutes ago      Running             kube-controller-manager     3                   d836f50a336ae       kube-controller-manager-functional-728898
	4448b2213cb7f       85b7a174738baecbc53029b7913cd430a2060e0cbdb5f56c7957d32ff7f241ef                                                 15 minutes ago      Exited              kube-apiserver              0                   8c1c591e2b17c       kube-apiserver-functional-728898
	
	
	==> coredns [6c6013453428c3c541fcfd3498084ea17cff6776f774569e24a2a5d87d36ae99] <==
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 1b226df79860026c6a52e67daa10d7f0d57ec5b023288ec00c5e05f93523c894564e15b91770d3a07ae1cfbe861d15b37d4a0027e69c546ab112970993a3b03b
	CoreDNS-1.11.3
	linux/amd64, go1.21.11, a6338e9
	
	
	==> coredns [d56da9bec71eb1199a87233f6976d8c3f87e73cdc0e336936406ffa0bc414016] <==
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 1b226df79860026c6a52e67daa10d7f0d57ec5b023288ec00c5e05f93523c894564e15b91770d3a07ae1cfbe861d15b37d4a0027e69c546ab112970993a3b03b
	CoreDNS-1.11.3
	linux/amd64, go1.21.11, a6338e9
	
	
	==> describe nodes <==
	Name:               functional-728898
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=functional-728898
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=33e6edc58d2014d70e908473920ef4ac8eae1e43
	                    minikube.k8s.io/name=functional-728898
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2025_04_07T12_30_43_0700
	                    minikube.k8s.io/version=v1.35.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Mon, 07 Apr 2025 12:30:39 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  functional-728898
	  AcquireTime:     <unset>
	  RenewTime:       Mon, 07 Apr 2025 12:41:25 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Mon, 07 Apr 2025 12:38:41 +0000   Mon, 07 Apr 2025 12:30:37 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Mon, 07 Apr 2025 12:38:41 +0000   Mon, 07 Apr 2025 12:30:37 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Mon, 07 Apr 2025 12:38:41 +0000   Mon, 07 Apr 2025 12:30:37 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Mon, 07 Apr 2025 12:38:41 +0000   Mon, 07 Apr 2025 12:30:39 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.39.151
	  Hostname:    functional-728898
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             3912788Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             3912788Ki
	  pods:               110
	System Info:
	  Machine ID:                 a18e22468a7c4d53b3b481cf5d4ae418
	  System UUID:                a18e2246-8a7c-4d53-b3b4-81cf5d4ae418
	  Boot ID:                    f0284b9d-ade3-45a2-8f61-7ec448c9266a
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.32.2
	  Kube-Proxy Version:         v1.32.2
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (15 in total)
	  Namespace                   Name                                          CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                          ------------  ----------  ---------------  -------------  ---
	  default                     hello-node-connect-58f9cf68d8-xlzk8           0 (0%)        0 (0%)      0 (0%)           0 (0%)         10m
	  default                     hello-node-fcfd88b6f-hq2l4                    0 (0%)        0 (0%)      0 (0%)           0 (0%)         10m
	  default                     mysql-58ccfd96bb-lwjg6                        600m (30%)    700m (35%)  512Mi (13%)      700Mi (18%)    10m
	  default                     nginx-svc                                     0 (0%)        0 (0%)      0 (0%)           0 (0%)         10m
	  default                     sp-pod                                        0 (0%)        0 (0%)      0 (0%)           0 (0%)         9m54s
	  kube-system                 coredns-668d6bf9bc-52vhr                      100m (5%)     0 (0%)      70Mi (1%)        170Mi (4%)     10m
	  kube-system                 coredns-668d6bf9bc-hlndz                      100m (5%)     0 (0%)      70Mi (1%)        170Mi (4%)     10m
	  kube-system                 etcd-functional-728898                        100m (5%)     0 (0%)      100Mi (2%)       0 (0%)         10m
	  kube-system                 kube-apiserver-functional-728898              250m (12%)    0 (0%)      0 (0%)           0 (0%)         10m
	  kube-system                 kube-controller-manager-functional-728898     200m (10%)    0 (0%)      0 (0%)           0 (0%)         10m
	  kube-system                 kube-proxy-48r4x                              0 (0%)        0 (0%)      0 (0%)           0 (0%)         10m
	  kube-system                 kube-scheduler-functional-728898              100m (5%)     0 (0%)      0 (0%)           0 (0%)         10m
	  kube-system                 storage-provisioner                           0 (0%)        0 (0%)      0 (0%)           0 (0%)         10m
	  kubernetes-dashboard        dashboard-metrics-scraper-5d59dccf9b-hchz9    0 (0%)        0 (0%)      0 (0%)           0 (0%)         9m54s
	  kubernetes-dashboard        kubernetes-dashboard-7779f9b69b-wcf49         0 (0%)        0 (0%)      0 (0%)           0 (0%)         9m54s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests     Limits
	  --------           --------     ------
	  cpu                1450m (72%)  700m (35%)
	  memory             752Mi (19%)  1040Mi (27%)
	  ephemeral-storage  0 (0%)       0 (0%)
	  hugepages-2Mi      0 (0%)       0 (0%)
	Events:
	  Type    Reason                   Age                From             Message
	  ----    ------                   ----               ----             -------
	  Normal  Starting                 10m                kube-proxy       
	  Normal  Starting                 10m                kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  10m (x8 over 10m)  kubelet          Node functional-728898 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    10m (x8 over 10m)  kubelet          Node functional-728898 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     10m (x7 over 10m)  kubelet          Node functional-728898 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  10m                kubelet          Updated Node Allocatable limit across pods
	  Normal  Starting                 10m                kubelet          Starting kubelet.
	  Normal  NodeAllocatableEnforced  10m                kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasSufficientMemory  10m                kubelet          Node functional-728898 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    10m                kubelet          Node functional-728898 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     10m                kubelet          Node functional-728898 status is now: NodeHasSufficientPID
	  Normal  RegisteredNode           10m                node-controller  Node functional-728898 event: Registered Node functional-728898 in Controller
	
	
	==> dmesg <==
	[  +6.536598] kauditd_printk_skb: 9 callbacks suppressed
	[  +8.303034] systemd-fstab-generator[3534]: Ignoring "noauto" option for root device
	[ +19.279620] kauditd_printk_skb: 12 callbacks suppressed
	[  +0.758322] systemd-fstab-generator[4757]: Ignoring "noauto" option for root device
	[  +0.154465] systemd-fstab-generator[4775]: Ignoring "noauto" option for root device
	[  +0.239609] systemd-fstab-generator[4789]: Ignoring "noauto" option for root device
	[  +0.149705] systemd-fstab-generator[4801]: Ignoring "noauto" option for root device
	[  +0.317598] systemd-fstab-generator[4829]: Ignoring "noauto" option for root device
	[Apr 7 12:26] kauditd_printk_skb: 156 callbacks suppressed
	[  +0.542217] systemd-fstab-generator[5030]: Ignoring "noauto" option for root device
	[  +2.412188] systemd-fstab-generator[5154]: Ignoring "noauto" option for root device
	[  +5.659491] kauditd_printk_skb: 84 callbacks suppressed
	[Apr 7 12:27] kauditd_printk_skb: 17 callbacks suppressed
	[Apr 7 12:30] systemd-fstab-generator[6290]: Ignoring "noauto" option for root device
	[  +7.055574] systemd-fstab-generator[6632]: Ignoring "noauto" option for root device
	[  +0.101623] kauditd_printk_skb: 64 callbacks suppressed
	[  +5.586609] kauditd_printk_skb: 12 callbacks suppressed
	[  +0.216184] systemd-fstab-generator[6814]: Ignoring "noauto" option for root device
	[  +6.275851] kauditd_printk_skb: 72 callbacks suppressed
	[Apr 7 12:31] kauditd_printk_skb: 19 callbacks suppressed
	[  +5.090629] kauditd_printk_skb: 22 callbacks suppressed
	[  +5.143480] kauditd_printk_skb: 16 callbacks suppressed
	[  +6.360557] kauditd_printk_skb: 3 callbacks suppressed
	[  +8.070659] kauditd_printk_skb: 35 callbacks suppressed
	[Apr 7 12:32] kauditd_printk_skb: 40 callbacks suppressed
	
	
	==> etcd [a5dbf84a635343ac9efc499953db9a64f8df84e22403f3a89220b3c6e7de8128] <==
	{"level":"info","ts":"2025-04-07T12:30:37.671492Z","caller":"api/capability.go:75","msg":"enabled capabilities for version","cluster-version":"3.5"}
	{"level":"info","ts":"2025-04-07T12:30:37.671536Z","caller":"etcdserver/server.go:2675","msg":"cluster version is updated","cluster-version":"3.5"}
	{"level":"info","ts":"2025-04-07T12:30:37.671881Z","caller":"v3rpc/health.go:61","msg":"grpc service status changed","service":"","status":"SERVING"}
	{"level":"info","ts":"2025-04-07T12:30:37.672484Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"127.0.0.1:2379"}
	{"level":"info","ts":"2025-04-07T12:30:37.672654Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2025-04-07T12:30:37.672684Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	{"level":"info","ts":"2025-04-07T12:30:37.675735Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"192.168.39.151:2379"}
	{"level":"info","ts":"2025-04-07T12:31:26.078812Z","caller":"traceutil/trace.go:171","msg":"trace[1497364407] transaction","detail":"{read_only:false; response_revision:559; number_of_response:1; }","duration":"376.791204ms","start":"2025-04-07T12:31:25.702000Z","end":"2025-04-07T12:31:26.078791Z","steps":["trace[1497364407] 'process raft request'  (duration: 376.679151ms)"],"step_count":1}
	{"level":"warn","ts":"2025-04-07T12:31:26.079261Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2025-04-07T12:31:25.701975Z","time spent":"376.914387ms","remote":"127.0.0.1:40850","response type":"/etcdserverpb.KV/Txn","request count":1,"request size":1102,"response count":0,"response size":39,"request content":"compare:<target:MOD key:\"/registry/services/endpoints/kube-system/k8s.io-minikube-hostpath\" mod_revision:558 > success:<request_put:<key:\"/registry/services/endpoints/kube-system/k8s.io-minikube-hostpath\" value_size:1029 >> failure:<request_range:<key:\"/registry/services/endpoints/kube-system/k8s.io-minikube-hostpath\" > >"}
	{"level":"info","ts":"2025-04-07T12:31:26.079674Z","caller":"traceutil/trace.go:171","msg":"trace[1599381144] linearizableReadLoop","detail":"{readStateIndex:583; appliedIndex:583; }","duration":"244.467824ms","start":"2025-04-07T12:31:25.835149Z","end":"2025-04-07T12:31:26.079616Z","steps":["trace[1599381144] 'read index received'  (duration: 244.463381ms)","trace[1599381144] 'applied index is now lower than readState.Index'  (duration: 2.531µs)"],"step_count":2}
	{"level":"warn","ts":"2025-04-07T12:31:26.079776Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"244.617123ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods\" limit:1 ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2025-04-07T12:31:26.081228Z","caller":"traceutil/trace.go:171","msg":"trace[1595407039] range","detail":"{range_begin:/registry/pods; range_end:; response_count:0; response_revision:559; }","duration":"244.68723ms","start":"2025-04-07T12:31:25.835126Z","end":"2025-04-07T12:31:26.079813Z","steps":["trace[1595407039] 'agreement among raft nodes before linearized reading'  (duration: 244.618359ms)"],"step_count":1}
	{"level":"info","ts":"2025-04-07T12:31:31.503518Z","caller":"traceutil/trace.go:171","msg":"trace[1928162418] linearizableReadLoop","detail":"{readStateIndex:623; appliedIndex:622; }","duration":"222.812294ms","start":"2025-04-07T12:31:31.280687Z","end":"2025-04-07T12:31:31.503499Z","steps":["trace[1928162418] 'read index received'  (duration: 222.668549ms)","trace[1928162418] 'applied index is now lower than readState.Index'  (duration: 142.815µs)"],"step_count":2}
	{"level":"warn","ts":"2025-04-07T12:31:31.503733Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"223.031701ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/health\" ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2025-04-07T12:31:31.503800Z","caller":"traceutil/trace.go:171","msg":"trace[1484031424] range","detail":"{range_begin:/registry/health; range_end:; response_count:0; response_revision:597; }","duration":"223.131344ms","start":"2025-04-07T12:31:31.280656Z","end":"2025-04-07T12:31:31.503787Z","steps":["trace[1484031424] 'agreement among raft nodes before linearized reading'  (duration: 223.031182ms)"],"step_count":1}
	{"level":"warn","ts":"2025-04-07T12:31:31.503992Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"176.794182ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods\" limit:1 ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2025-04-07T12:31:31.504046Z","caller":"traceutil/trace.go:171","msg":"trace[1070217299] range","detail":"{range_begin:/registry/pods; range_end:; response_count:0; response_revision:597; }","duration":"176.878442ms","start":"2025-04-07T12:31:31.327157Z","end":"2025-04-07T12:31:31.504036Z","steps":["trace[1070217299] 'agreement among raft nodes before linearized reading'  (duration: 176.725766ms)"],"step_count":1}
	{"level":"info","ts":"2025-04-07T12:32:56.885407Z","caller":"traceutil/trace.go:171","msg":"trace[1165139005] transaction","detail":"{read_only:false; response_revision:756; number_of_response:1; }","duration":"238.68683ms","start":"2025-04-07T12:32:56.646704Z","end":"2025-04-07T12:32:56.885391Z","steps":["trace[1165139005] 'process raft request'  (duration: 238.599489ms)"],"step_count":1}
	{"level":"info","ts":"2025-04-07T12:33:27.279247Z","caller":"traceutil/trace.go:171","msg":"trace[2030718870] linearizableReadLoop","detail":"{readStateIndex:852; appliedIndex:851; }","duration":"116.948433ms","start":"2025-04-07T12:33:27.162284Z","end":"2025-04-07T12:33:27.279233Z","steps":["trace[2030718870] 'read index received'  (duration: 116.787984ms)","trace[2030718870] 'applied index is now lower than readState.Index'  (duration: 160.05µs)"],"step_count":2}
	{"level":"warn","ts":"2025-04-07T12:33:27.279378Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"117.094147ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods\" limit:1 ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2025-04-07T12:33:27.279400Z","caller":"traceutil/trace.go:171","msg":"trace[881207067] range","detail":"{range_begin:/registry/pods; range_end:; response_count:0; response_revision:801; }","duration":"117.162071ms","start":"2025-04-07T12:33:27.162232Z","end":"2025-04-07T12:33:27.279394Z","steps":["trace[881207067] 'agreement among raft nodes before linearized reading'  (duration: 117.089321ms)"],"step_count":1}
	{"level":"info","ts":"2025-04-07T12:33:27.279630Z","caller":"traceutil/trace.go:171","msg":"trace[1471716281] transaction","detail":"{read_only:false; response_revision:801; number_of_response:1; }","duration":"229.823277ms","start":"2025-04-07T12:33:27.049797Z","end":"2025-04-07T12:33:27.279620Z","steps":["trace[1471716281] 'process raft request'  (duration: 229.318566ms)"],"step_count":1}
	{"level":"info","ts":"2025-04-07T12:40:37.706110Z","caller":"mvcc/index.go:214","msg":"compact tree index","revision":929}
	{"level":"info","ts":"2025-04-07T12:40:37.717842Z","caller":"mvcc/kvstore_compaction.go:72","msg":"finished scheduled compaction","compact-revision":929,"took":"11.370046ms","hash":1291112368,"current-db-size-bytes":3923968,"current-db-size":"3.9 MB","current-db-size-in-use-bytes":3923968,"current-db-size-in-use":"3.9 MB"}
	{"level":"info","ts":"2025-04-07T12:40:37.718004Z","caller":"mvcc/hash.go:151","msg":"storing new hash","hash":1291112368,"revision":929,"compact-revision":-1}
	
	
	==> kernel <==
	 12:41:29 up 18 min,  0 users,  load average: 0.06, 0.16, 0.20
	Linux functional-728898 5.10.207 #1 SMP Tue Jan 14 08:15:54 UTC 2025 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2023.02.9"
	
	
	==> kube-apiserver [1d400b079c78d32546aa75b8827e7628204648f23b4d6286f7300db0f84913f6] <==
	I0407 12:30:39.222175       1 controller.go:615] quota admission added evaluator for: namespaces
	I0407 12:30:39.301332       1 controller.go:615] quota admission added evaluator for: leases.coordination.k8s.io
	I0407 12:30:40.046404       1 storage_scheduling.go:95] created PriorityClass system-node-critical with value 2000001000
	I0407 12:30:40.058724       1 storage_scheduling.go:95] created PriorityClass system-cluster-critical with value 2000000000
	I0407 12:30:40.059000       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I0407 12:30:41.015289       1 controller.go:615] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I0407 12:30:41.118897       1 controller.go:615] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I0407 12:30:41.271299       1 alloc.go:330] "allocated clusterIPs" service="default/kubernetes" clusterIPs={"IPv4":"10.96.0.1"}
	W0407 12:30:41.290830       1 lease.go:265] Resetting endpoints for master service "kubernetes" to [192.168.39.151]
	I0407 12:30:41.292650       1 controller.go:615] quota admission added evaluator for: endpoints
	I0407 12:30:41.301163       1 controller.go:615] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I0407 12:30:42.126990       1 controller.go:615] quota admission added evaluator for: serviceaccounts
	I0407 12:30:42.485542       1 controller.go:615] quota admission added evaluator for: deployments.apps
	I0407 12:30:42.514215       1 alloc.go:330] "allocated clusterIPs" service="kube-system/kube-dns" clusterIPs={"IPv4":"10.96.0.10"}
	I0407 12:30:42.535359       1 controller.go:615] quota admission added evaluator for: daemonsets.apps
	I0407 12:30:47.324258       1 controller.go:615] quota admission added evaluator for: controllerrevisions.apps
	I0407 12:30:47.530193       1 controller.go:615] quota admission added evaluator for: replicasets.apps
	I0407 12:31:04.195020       1 alloc.go:330] "allocated clusterIPs" service="default/invalid-svc" clusterIPs={"IPv4":"10.105.130.69"}
	I0407 12:31:09.589303       1 alloc.go:330] "allocated clusterIPs" service="default/hello-node" clusterIPs={"IPv4":"10.111.59.160"}
	I0407 12:31:12.072503       1 alloc.go:330] "allocated clusterIPs" service="default/nginx-svc" clusterIPs={"IPv4":"10.96.97.41"}
	I0407 12:31:21.974531       1 alloc.go:330] "allocated clusterIPs" service="default/hello-node-connect" clusterIPs={"IPv4":"10.111.250.166"}
	I0407 12:31:28.109335       1 alloc.go:330] "allocated clusterIPs" service="default/mysql" clusterIPs={"IPv4":"10.104.179.210"}
	E0407 12:31:34.020442       1 conn.go:339] Error on socket receive: read tcp 192.168.39.151:8441->192.168.39.1:48646: use of closed network connection
	I0407 12:31:35.269151       1 alloc.go:330] "allocated clusterIPs" service="kubernetes-dashboard/kubernetes-dashboard" clusterIPs={"IPv4":"10.105.52.253"}
	I0407 12:31:35.348019       1 alloc.go:330] "allocated clusterIPs" service="kubernetes-dashboard/dashboard-metrics-scraper" clusterIPs={"IPv4":"10.110.229.142"}
	
	
	==> kube-apiserver [4448b2213cb7fb1f4a035fa143c4303e3f68cedd25140b0278531edf8fa55a1f] <==
	W0407 12:30:31.350734       1 logging.go:55] [core] [Channel #37 SubChannel #38]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0407 12:30:31.435423       1 logging.go:55] [core] [Channel #115 SubChannel #116]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0407 12:30:31.469400       1 logging.go:55] [core] [Channel #64 SubChannel #65]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0407 12:30:31.494589       1 logging.go:55] [core] [Channel #97 SubChannel #98]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0407 12:30:31.511914       1 logging.go:55] [core] [Channel #178 SubChannel #179]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0407 12:30:31.523312       1 logging.go:55] [core] [Channel #76 SubChannel #77]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0407 12:30:31.535102       1 logging.go:55] [core] [Channel #79 SubChannel #80]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0407 12:30:31.535337       1 logging.go:55] [core] [Channel #103 SubChannel #104]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0407 12:30:31.569704       1 logging.go:55] [core] [Channel #145 SubChannel #146]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0407 12:30:31.611308       1 logging.go:55] [core] [Channel #52 SubChannel #53]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0407 12:30:31.635718       1 logging.go:55] [core] [Channel #151 SubChannel #152]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0407 12:30:31.654664       1 logging.go:55] [core] [Channel #109 SubChannel #110]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0407 12:30:31.664385       1 logging.go:55] [core] [Channel #133 SubChannel #134]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0407 12:30:31.671219       1 logging.go:55] [core] [Channel #139 SubChannel #140]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0407 12:30:31.680018       1 logging.go:55] [core] [Channel #34 SubChannel #35]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0407 12:30:31.744506       1 logging.go:55] [core] [Channel #154 SubChannel #155]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0407 12:30:31.749228       1 logging.go:55] [core] [Channel #124 SubChannel #125]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0407 12:30:31.816075       1 logging.go:55] [core] [Channel #175 SubChannel #176]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0407 12:30:31.816258       1 logging.go:55] [core] [Channel #169 SubChannel #170]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0407 12:30:31.838879       1 logging.go:55] [core] [Channel #46 SubChannel #47]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0407 12:30:32.032146       1 logging.go:55] [core] [Channel #40 SubChannel #41]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0407 12:30:32.051575       1 logging.go:55] [core] [Channel #118 SubChannel #119]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0407 12:30:32.056430       1 logging.go:55] [core] [Channel #106 SubChannel #107]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0407 12:30:32.063222       1 logging.go:55] [core] [Channel #157 SubChannel #158]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0407 12:30:32.339353       1 logging.go:55] [core] [Channel #70 SubChannel #71]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	
	
	==> kube-controller-manager [7486c26ef9200d679640eddc4b175bc9720bddd11b0a617684cb0f38e2f2290a] <==
	I0407 12:31:35.098687       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kubernetes-dashboard/dashboard-metrics-scraper-5d59dccf9b" duration="86.959809ms"
	I0407 12:31:35.153396       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kubernetes-dashboard/kubernetes-dashboard-7779f9b69b" duration="84.003882ms"
	I0407 12:31:35.157183       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kubernetes-dashboard/kubernetes-dashboard-7779f9b69b" duration="128.559µs"
	I0407 12:31:35.167835       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kubernetes-dashboard/dashboard-metrics-scraper-5d59dccf9b" duration="69.078417ms"
	I0407 12:31:35.168134       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kubernetes-dashboard/dashboard-metrics-scraper-5d59dccf9b" duration="233.503µs"
	I0407 12:31:35.210288       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kubernetes-dashboard/dashboard-metrics-scraper-5d59dccf9b" duration="82.927µs"
	I0407 12:31:35.244402       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kubernetes-dashboard/kubernetes-dashboard-7779f9b69b" duration="59.769µs"
	I0407 12:31:43.822441       1 range_allocator.go:247] "Successfully synced" logger="node-ipam-controller" key="functional-728898"
	I0407 12:32:16.211032       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/mysql-58ccfd96bb" duration="41.566µs"
	I0407 12:32:31.438987       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/mysql-58ccfd96bb" duration="140.138µs"
	I0407 12:32:44.914182       1 range_allocator.go:247] "Successfully synced" logger="node-ipam-controller" key="functional-728898"
	I0407 12:32:57.448590       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kubernetes-dashboard/kubernetes-dashboard-7779f9b69b" duration="25.15825ms"
	I0407 12:32:57.449242       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kubernetes-dashboard/kubernetes-dashboard-7779f9b69b" duration="56.981µs"
	I0407 12:33:00.460846       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kubernetes-dashboard/dashboard-metrics-scraper-5d59dccf9b" duration="17.619517ms"
	I0407 12:33:00.463099       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kubernetes-dashboard/dashboard-metrics-scraper-5d59dccf9b" duration="60.854µs"
	I0407 12:33:15.336079       1 range_allocator.go:247] "Successfully synced" logger="node-ipam-controller" key="functional-728898"
	I0407 12:33:43.432834       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/mysql-58ccfd96bb" duration="45.34µs"
	I0407 12:33:56.435800       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/mysql-58ccfd96bb" duration="54.877µs"
	I0407 12:35:19.438739       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/mysql-58ccfd96bb" duration="77.409µs"
	I0407 12:35:33.435819       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/mysql-58ccfd96bb" duration="46.697µs"
	I0407 12:36:45.433325       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/mysql-58ccfd96bb" duration="50.768µs"
	I0407 12:36:56.433661       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/mysql-58ccfd96bb" duration="45.254µs"
	I0407 12:38:41.418295       1 range_allocator.go:247] "Successfully synced" logger="node-ipam-controller" key="functional-728898"
	I0407 12:38:49.439081       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/mysql-58ccfd96bb" duration="55.977µs"
	I0407 12:39:03.436003       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/mysql-58ccfd96bb" duration="78.958µs"
	
	
	==> kube-proxy [ad18f7874f75273858975f81e17ce0d514bc3c3c0663b789c0cf9b97d6c5db9a] <==
		add table ip kube-proxy
		^^^^^^^^^^^^^^^^^^^^^^^^
	 >
	E0407 12:30:48.190886       1 proxier.go:733] "Error cleaning up nftables rules" err=<
		could not run nftables command: /dev/stdin:1:1-25: Error: Could not process rule: Operation not supported
		add table ip6 kube-proxy
		^^^^^^^^^^^^^^^^^^^^^^^^^
	 >
	I0407 12:30:48.201043       1 server.go:698] "Successfully retrieved node IP(s)" IPs=["192.168.39.151"]
	E0407 12:30:48.201181       1 server.go:234] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I0407 12:30:48.237798       1 server_linux.go:147] "No iptables support for family" ipFamily="IPv6"
	I0407 12:30:48.237845       1 server.go:245] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I0407 12:30:48.237868       1 server_linux.go:170] "Using iptables Proxier"
	I0407 12:30:48.241774       1 proxier.go:255] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I0407 12:30:48.242120       1 server.go:497] "Version info" version="v1.32.2"
	I0407 12:30:48.242167       1 server.go:499] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0407 12:30:48.243461       1 config.go:199] "Starting service config controller"
	I0407 12:30:48.243522       1 shared_informer.go:313] Waiting for caches to sync for service config
	I0407 12:30:48.243554       1 config.go:105] "Starting endpoint slice config controller"
	I0407 12:30:48.243570       1 shared_informer.go:313] Waiting for caches to sync for endpoint slice config
	I0407 12:30:48.244139       1 config.go:329] "Starting node config controller"
	I0407 12:30:48.244179       1 shared_informer.go:313] Waiting for caches to sync for node config
	I0407 12:30:48.343736       1 shared_informer.go:320] Caches are synced for endpoint slice config
	I0407 12:30:48.343791       1 shared_informer.go:320] Caches are synced for service config
	I0407 12:30:48.345592       1 shared_informer.go:320] Caches are synced for node config
	
	
	==> kube-scheduler [c3121037762e1f6146a3c087b19535c589d603013d08a88f808c08e4ce645b11] <==
	W0407 12:30:40.052322       1 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	E0407 12:30:40.052411       1 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumeclaims\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0407 12:30:40.070143       1 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	E0407 12:30:40.070298       1 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Pod: failed to list *v1.Pod: pods is forbidden: User \"system:kube-scheduler\" cannot list resource \"pods\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0407 12:30:40.242271       1 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	E0407 12:30:40.242309       1 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PodDisruptionBudget: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User \"system:kube-scheduler\" cannot list resource \"poddisruptionbudgets\" in API group \"policy\" at the cluster scope" logger="UnhandledError"
	W0407 12:30:40.324029       1 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	E0407 12:30:40.325773       1 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicasets\" in API group \"apps\" at the cluster scope" logger="UnhandledError"
	W0407 12:30:40.342689       1 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	E0407 12:30:40.343174       1 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csidrivers\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	W0407 12:30:40.480619       1 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	E0407 12:30:40.480659       1 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicationcontrollers\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0407 12:30:40.489790       1 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	E0407 12:30:40.489889       1 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Namespace: failed to list *v1.Namespace: namespaces is forbidden: User \"system:kube-scheduler\" cannot list resource \"namespaces\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0407 12:30:40.499982       1 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	E0407 12:30:40.500033       1 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"storageclasses\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	W0407 12:30:40.562101       1 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	E0407 12:30:40.562199       1 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"statefulsets\" in API group \"apps\" at the cluster scope" logger="UnhandledError"
	W0407 12:30:40.642746       1 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.VolumeAttachment: volumeattachments.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "volumeattachments" in API group "storage.k8s.io" at the cluster scope
	E0407 12:30:40.642803       1 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.VolumeAttachment: failed to list *v1.VolumeAttachment: volumeattachments.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"volumeattachments\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	W0407 12:30:40.676807       1 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	E0407 12:30:40.677162       1 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User \"system:kube-scheduler\" cannot list resource \"services\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0407 12:30:40.698424       1 reflector.go:569] runtime/asm_amd64.s:1700: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	E0407 12:30:40.698589       1 reflector.go:166] "Unhandled Error" err="runtime/asm_amd64.s:1700: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"extension-apiserver-authentication\" is forbidden: User \"system:kube-scheduler\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\"" logger="UnhandledError"
	I0407 12:30:43.841714       1 shared_informer.go:320] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	
	==> kubelet <==
	Apr 07 12:40:32 functional-728898 kubelet[6639]: E0407 12:40:32.740161    6639 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1744029632739734916,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:294016,},InodesUsed:&UInt64Value{Value:127,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Apr 07 12:40:34 functional-728898 kubelet[6639]: E0407 12:40:34.416952    6639 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"mysql\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/mysql:5.7\\\": ErrImagePull: fetching target platform image selected from image index: reading manifest sha256:dab0a802b44617303694fb17d166501de279c3031ddeb28c56ecf7fcab5ef0da in docker.io/library/mysql: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="default/mysql-58ccfd96bb-lwjg6" podUID="469f1d47-d0fc-45ea-834b-f92ab80cba97"
	Apr 07 12:40:35 functional-728898 kubelet[6639]: E0407 12:40:35.416558    6639 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"myfrontend\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/nginx\\\": ErrImagePull: reading manifest latest in docker.io/library/nginx: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="default/sp-pod" podUID="59ccb8e9-51d3-4535-9029-fdfba6756cdc"
	Apr 07 12:40:42 functional-728898 kubelet[6639]: E0407 12:40:42.446048    6639 iptables.go:577] "Could not set up iptables canary" err=<
	Apr 07 12:40:42 functional-728898 kubelet[6639]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Apr 07 12:40:42 functional-728898 kubelet[6639]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Apr 07 12:40:42 functional-728898 kubelet[6639]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Apr 07 12:40:42 functional-728898 kubelet[6639]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Apr 07 12:40:42 functional-728898 kubelet[6639]: E0407 12:40:42.741561    6639 eviction_manager.go:259] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1744029642741294751,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:294016,},InodesUsed:&UInt64Value{Value:127,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Apr 07 12:40:42 functional-728898 kubelet[6639]: E0407 12:40:42.741743    6639 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1744029642741294751,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:294016,},InodesUsed:&UInt64Value{Value:127,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Apr 07 12:40:45 functional-728898 kubelet[6639]: E0407 12:40:45.418256    6639 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"mysql\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/mysql:5.7\\\": ErrImagePull: fetching target platform image selected from image index: reading manifest sha256:dab0a802b44617303694fb17d166501de279c3031ddeb28c56ecf7fcab5ef0da in docker.io/library/mysql: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="default/mysql-58ccfd96bb-lwjg6" podUID="469f1d47-d0fc-45ea-834b-f92ab80cba97"
	Apr 07 12:40:48 functional-728898 kubelet[6639]: E0407 12:40:48.416139    6639 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"myfrontend\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/nginx\\\": ErrImagePull: reading manifest latest in docker.io/library/nginx: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="default/sp-pod" podUID="59ccb8e9-51d3-4535-9029-fdfba6756cdc"
	Apr 07 12:40:52 functional-728898 kubelet[6639]: E0407 12:40:52.744773    6639 eviction_manager.go:259] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1744029652744335450,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:294016,},InodesUsed:&UInt64Value{Value:127,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Apr 07 12:40:52 functional-728898 kubelet[6639]: E0407 12:40:52.745240    6639 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1744029652744335450,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:294016,},InodesUsed:&UInt64Value{Value:127,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Apr 07 12:40:59 functional-728898 kubelet[6639]: E0407 12:40:59.416873    6639 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"myfrontend\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/nginx\\\": ErrImagePull: reading manifest latest in docker.io/library/nginx: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="default/sp-pod" podUID="59ccb8e9-51d3-4535-9029-fdfba6756cdc"
	Apr 07 12:41:00 functional-728898 kubelet[6639]: E0407 12:41:00.418035    6639 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"mysql\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/mysql:5.7\\\": ErrImagePull: fetching target platform image selected from image index: reading manifest sha256:dab0a802b44617303694fb17d166501de279c3031ddeb28c56ecf7fcab5ef0da in docker.io/library/mysql: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="default/mysql-58ccfd96bb-lwjg6" podUID="469f1d47-d0fc-45ea-834b-f92ab80cba97"
	Apr 07 12:41:02 functional-728898 kubelet[6639]: E0407 12:41:02.748298    6639 eviction_manager.go:259] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1744029662747728821,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:294016,},InodesUsed:&UInt64Value{Value:127,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Apr 07 12:41:02 functional-728898 kubelet[6639]: E0407 12:41:02.748336    6639 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1744029662747728821,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:294016,},InodesUsed:&UInt64Value{Value:127,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Apr 07 12:41:12 functional-728898 kubelet[6639]: E0407 12:41:12.752106    6639 eviction_manager.go:259] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1744029672751269217,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:294016,},InodesUsed:&UInt64Value{Value:127,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Apr 07 12:41:12 functional-728898 kubelet[6639]: E0407 12:41:12.752474    6639 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1744029672751269217,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:294016,},InodesUsed:&UInt64Value{Value:127,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Apr 07 12:41:13 functional-728898 kubelet[6639]: E0407 12:41:13.416190    6639 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"myfrontend\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/nginx\\\": ErrImagePull: reading manifest latest in docker.io/library/nginx: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="default/sp-pod" podUID="59ccb8e9-51d3-4535-9029-fdfba6756cdc"
	Apr 07 12:41:14 functional-728898 kubelet[6639]: E0407 12:41:14.417507    6639 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"mysql\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/mysql:5.7\\\": ErrImagePull: fetching target platform image selected from image index: reading manifest sha256:dab0a802b44617303694fb17d166501de279c3031ddeb28c56ecf7fcab5ef0da in docker.io/library/mysql: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="default/mysql-58ccfd96bb-lwjg6" podUID="469f1d47-d0fc-45ea-834b-f92ab80cba97"
	Apr 07 12:41:22 functional-728898 kubelet[6639]: E0407 12:41:22.754788    6639 eviction_manager.go:259] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1744029682754498542,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:294016,},InodesUsed:&UInt64Value{Value:127,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Apr 07 12:41:22 functional-728898 kubelet[6639]: E0407 12:41:22.754916    6639 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1744029682754498542,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:294016,},InodesUsed:&UInt64Value{Value:127,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Apr 07 12:41:28 functional-728898 kubelet[6639]: E0407 12:41:28.422049    6639 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"myfrontend\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/nginx\\\": ErrImagePull: reading manifest latest in docker.io/library/nginx: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="default/sp-pod" podUID="59ccb8e9-51d3-4535-9029-fdfba6756cdc"
	
	
	==> kubernetes-dashboard [79e6ae6b95171565d73c07c01490487ea7fee11ba11f7e6a30a204b86335a407] <==
	2025/04/07 12:32:57 Using namespace: kubernetes-dashboard
	2025/04/07 12:32:57 Using in-cluster config to connect to apiserver
	2025/04/07 12:32:57 Using secret token for csrf signing
	2025/04/07 12:32:57 Initializing csrf token from kubernetes-dashboard-csrf secret
	2025/04/07 12:32:57 Empty token. Generating and storing in a secret kubernetes-dashboard-csrf
	2025/04/07 12:32:57 Successful initial request to the apiserver, version: v1.32.2
	2025/04/07 12:32:57 Generating JWE encryption key
	2025/04/07 12:32:57 New synchronizer has been registered: kubernetes-dashboard-key-holder-kubernetes-dashboard. Starting
	2025/04/07 12:32:57 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kubernetes-dashboard
	2025/04/07 12:32:57 Initializing JWE encryption key from synchronized object
	2025/04/07 12:32:57 Creating in-cluster Sidecar client
	2025/04/07 12:32:57 Serving insecurely on HTTP port: 9090
	2025/04/07 12:32:57 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	2025/04/07 12:33:27 Successful request to sidecar
	2025/04/07 12:32:57 Starting overwatch
	
	
	==> storage-provisioner [9930e272f17b71e45f60674688fe7b134094010f20529b2bbc81518611620890] <==
	I0407 12:30:49.340577       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I0407 12:30:49.358856       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I0407 12:30:49.359013       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	I0407 12:30:49.375283       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I0407 12:30:49.375512       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_functional-728898_d05a709f-ce9c-4fe9-847c-85935f5dea43!
	I0407 12:30:49.376838       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"c3fea848-7c7f-4876-8847-a2c18a541bf2", APIVersion:"v1", ResourceVersion:"395", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' functional-728898_d05a709f-ce9c-4fe9-847c-85935f5dea43 became leader
	I0407 12:30:49.476087       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_functional-728898_d05a709f-ce9c-4fe9-847c-85935f5dea43!
	I0407 12:31:15.591256       1 controller.go:1332] provision "default/myclaim" class "standard": started
	I0407 12:31:15.592280       1 event.go:282] Event(v1.ObjectReference{Kind:"PersistentVolumeClaim", Namespace:"default", Name:"myclaim", UID:"f2b97eeb-298e-4a42-bbfa-299ca292c752", APIVersion:"v1", ResourceVersion:"508", FieldPath:""}): type: 'Normal' reason: 'Provisioning' External provisioner is provisioning volume for claim "default/myclaim"
	I0407 12:31:15.591545       1 storage_provisioner.go:61] Provisioning volume {&StorageClass{ObjectMeta:{standard    35e91b18-6ea1-43db-9800-acb83bd0568f 378 0 2025-04-07 12:30:48 +0000 UTC <nil> <nil> map[addonmanager.kubernetes.io/mode:EnsureExists] map[kubectl.kubernetes.io/last-applied-configuration:{"apiVersion":"storage.k8s.io/v1","kind":"StorageClass","metadata":{"annotations":{"storageclass.kubernetes.io/is-default-class":"true"},"labels":{"addonmanager.kubernetes.io/mode":"EnsureExists"},"name":"standard"},"provisioner":"k8s.io/minikube-hostpath"}
	 storageclass.kubernetes.io/is-default-class:true] [] []  [{kubectl-client-side-apply Update storage.k8s.io/v1 2025-04-07 12:30:48 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{".":{},"f:kubectl.kubernetes.io/last-applied-configuration":{},"f:storageclass.kubernetes.io/is-default-class":{}},"f:labels":{".":{},"f:addonmanager.kubernetes.io/mode":{}}},"f:provisioner":{},"f:reclaimPolicy":{},"f:volumeBindingMode":{}}}]},Provisioner:k8s.io/minikube-hostpath,Parameters:map[string]string{},ReclaimPolicy:*Delete,MountOptions:[],AllowVolumeExpansion:nil,VolumeBindingMode:*Immediate,AllowedTopologies:[]TopologySelectorTerm{},} pvc-f2b97eeb-298e-4a42-bbfa-299ca292c752 &PersistentVolumeClaim{ObjectMeta:{myclaim  default  f2b97eeb-298e-4a42-bbfa-299ca292c752 508 0 2025-04-07 12:31:15 +0000 UTC <nil> <nil> map[] map[kubectl.kubernetes.io/last-applied-configuration:{"apiVersion":"v1","kind":"PersistentVolumeClaim","metadata":{"annotations":{},"name":"myclaim","namespace":"default"},"spec":{"accessModes":["Rea
dWriteOnce"],"resources":{"requests":{"storage":"500Mi"}},"volumeMode":"Filesystem"}}
	 volume.beta.kubernetes.io/storage-provisioner:k8s.io/minikube-hostpath volume.kubernetes.io/storage-provisioner:k8s.io/minikube-hostpath] [] [kubernetes.io/pvc-protection]  [{kube-controller-manager Update v1 2025-04-07 12:31:15 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:volume.beta.kubernetes.io/storage-provisioner":{},"f:volume.kubernetes.io/storage-provisioner":{}}}}} {kubectl-client-side-apply Update v1 2025-04-07 12:31:15 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{".":{},"f:kubectl.kubernetes.io/last-applied-configuration":{}}},"f:spec":{"f:accessModes":{},"f:resources":{"f:requests":{".":{},"f:storage":{}}},"f:volumeMode":{}}}}]},Spec:PersistentVolumeClaimSpec{AccessModes:[ReadWriteOnce],Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{storage: {{524288000 0} {<nil>} 500Mi BinarySI},},},VolumeName:,Selector:nil,StorageClassName:*standard,VolumeMode:*Filesystem,DataSource:nil,},Status:PersistentVolumeClaimStatus{Phase:Pending,AccessModes:[],Capacity:
ResourceList{},Conditions:[]PersistentVolumeClaimCondition{},},} nil} to /tmp/hostpath-provisioner/default/myclaim
	I0407 12:31:15.593022       1 controller.go:1439] provision "default/myclaim" class "standard": volume "pvc-f2b97eeb-298e-4a42-bbfa-299ca292c752" provisioned
	I0407 12:31:15.593111       1 controller.go:1456] provision "default/myclaim" class "standard": succeeded
	I0407 12:31:15.593136       1 volume_store.go:212] Trying to save persistentvolume "pvc-f2b97eeb-298e-4a42-bbfa-299ca292c752"
	I0407 12:31:15.608870       1 volume_store.go:219] persistentvolume "pvc-f2b97eeb-298e-4a42-bbfa-299ca292c752" saved
	I0407 12:31:15.611834       1 event.go:282] Event(v1.ObjectReference{Kind:"PersistentVolumeClaim", Namespace:"default", Name:"myclaim", UID:"f2b97eeb-298e-4a42-bbfa-299ca292c752", APIVersion:"v1", ResourceVersion:"508", FieldPath:""}): type: 'Normal' reason: 'ProvisioningSucceeded' Successfully provisioned volume pvc-f2b97eeb-298e-4a42-bbfa-299ca292c752
	

                                                
                                                
-- /stdout --
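The storage-provisioner log above traces one complete dynamic-provisioning cycle: the hostpath provisioner acquires the k8s.io-minikube-hostpath lease, picks up the pending claim default/myclaim against the default "standard" StorageClass, and provisions and saves pvc-f2b97eeb-298e-4a42-bbfa-299ca292c752 under /tmp/hostpath-provisioner. A minimal sketch of a claim that would drive those events, reconstructed from the fields visible in the log (name, namespace, access mode, 500Mi request, class "standard") rather than taken from the test's actual testdata manifest:

  # Apply a PVC equivalent to "default/myclaim" from the log; the in-cluster
  # hostpath provisioner behind storage class "standard" should bind it immediately.
  kubectl --context functional-728898 apply -f - <<EOF
  apiVersion: v1
  kind: PersistentVolumeClaim
  metadata:
    name: myclaim
    namespace: default
  spec:
    storageClassName: standard
    accessModes:
      - ReadWriteOnce
    resources:
      requests:
        storage: 500Mi
    volumeMode: Filesystem
  EOF
  # Confirm the claim bound to the dynamically provisioned hostpath volume.
  kubectl --context functional-728898 get pvc myclaim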
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p functional-728898 -n functional-728898
helpers_test.go:261: (dbg) Run:  kubectl --context functional-728898 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:272: non-running pods: busybox-mount mysql-58ccfd96bb-lwjg6 sp-pod
helpers_test.go:274: ======> post-mortem[TestFunctional/parallel/MySQL]: describe non-running pods <======
helpers_test.go:277: (dbg) Run:  kubectl --context functional-728898 describe pod busybox-mount mysql-58ccfd96bb-lwjg6 sp-pod
helpers_test.go:282: (dbg) kubectl --context functional-728898 describe pod busybox-mount mysql-58ccfd96bb-lwjg6 sp-pod:

                                                
                                                
-- stdout --
	Name:             busybox-mount
	Namespace:        default
	Priority:         0
	Service Account:  default
	Node:             functional-728898/192.168.39.151
	Start Time:       Mon, 07 Apr 2025 12:31:29 +0000
	Labels:           integration-test=busybox-mount
	Annotations:      <none>
	Status:           Succeeded
	IP:               10.244.0.14
	IPs:
	  IP:  10.244.0.14
	Containers:
	  mount-munger:
	    Container ID:  cri-o://3ac976c79c64830e17267759075a44ad7a7741506cb370dd4507b325c9883b76
	    Image:         gcr.io/k8s-minikube/busybox:1.28.4-glibc
	    Image ID:      56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c
	    Port:          <none>
	    Host Port:     <none>
	    Command:
	      /bin/sh
	      -c
	      --
	    Args:
	      cat /mount-9p/created-by-test; echo test > /mount-9p/created-by-pod; rm /mount-9p/created-by-test-removed-by-pod; echo test > /mount-9p/created-by-pod-removed-by-test date >> /mount-9p/pod-dates
	    State:          Terminated
	      Reason:       Completed
	      Exit Code:    0
	      Started:      Mon, 07 Apr 2025 12:32:19 +0000
	      Finished:     Mon, 07 Apr 2025 12:32:19 +0000
	    Ready:          False
	    Restart Count:  0
	    Environment:    <none>
	    Mounts:
	      /mount-9p from test-volume (rw)
	      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-8pmkp (ro)
	Conditions:
	  Type                        Status
	  PodReadyToStartContainers   False 
	  Initialized                 True 
	  Ready                       False 
	  ContainersReady             False 
	  PodScheduled                True 
	Volumes:
	  test-volume:
	    Type:          HostPath (bare host directory volume)
	    Path:          /mount-9p
	    HostPathType:  
	  kube-api-access-8pmkp:
	    Type:                    Projected (a volume that contains injected data from multiple sources)
	    TokenExpirationSeconds:  3607
	    ConfigMapName:           kube-root-ca.crt
	    ConfigMapOptional:       <nil>
	    DownwardAPI:             true
	QoS Class:                   BestEffort
	Node-Selectors:              <none>
	Tolerations:                 node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
	                             node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
	Events:
	  Type    Reason     Age    From               Message
	  ----    ------     ----   ----               -------
	  Normal  Scheduled  10m    default-scheduler  Successfully assigned default/busybox-mount to functional-728898
	  Normal  Pulling    10m    kubelet            Pulling image "gcr.io/k8s-minikube/busybox:1.28.4-glibc"
	  Normal  Pulled     9m11s  kubelet            Successfully pulled image "gcr.io/k8s-minikube/busybox:1.28.4-glibc" in 4.101s (49.761s including waiting). Image size: 4631262 bytes.
	  Normal  Created    9m11s  kubelet            Created container: mount-munger
	  Normal  Started    9m11s  kubelet            Started container mount-munger
	
	
	Name:             mysql-58ccfd96bb-lwjg6
	Namespace:        default
	Priority:         0
	Service Account:  default
	Node:             functional-728898/192.168.39.151
	Start Time:       Mon, 07 Apr 2025 12:31:28 +0000
	Labels:           app=mysql
	                  pod-template-hash=58ccfd96bb
	Annotations:      <none>
	Status:           Pending
	IP:               10.244.0.13
	IPs:
	  IP:           10.244.0.13
	Controlled By:  ReplicaSet/mysql-58ccfd96bb
	Containers:
	  mysql:
	    Container ID:   
	    Image:          docker.io/mysql:5.7
	    Image ID:       
	    Port:           3306/TCP
	    Host Port:      0/TCP
	    State:          Waiting
	      Reason:       ImagePullBackOff
	    Ready:          False
	    Restart Count:  0
	    Limits:
	      cpu:     700m
	      memory:  700Mi
	    Requests:
	      cpu:     600m
	      memory:  512Mi
	    Environment:
	      MYSQL_ROOT_PASSWORD:  password
	    Mounts:
	      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-5gszn (ro)
	Conditions:
	  Type                        Status
	  PodReadyToStartContainers   True 
	  Initialized                 True 
	  Ready                       False 
	  ContainersReady             False 
	  PodScheduled                True 
	Volumes:
	  kube-api-access-5gszn:
	    Type:                    Projected (a volume that contains injected data from multiple sources)
	    TokenExpirationSeconds:  3607
	    ConfigMapName:           kube-root-ca.crt
	    ConfigMapOptional:       <nil>
	    DownwardAPI:             true
	QoS Class:                   Burstable
	Node-Selectors:              <none>
	Tolerations:                 node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
	                             node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
	Events:
	  Type     Reason     Age                    From               Message
	  ----     ------     ----                   ----               -------
	  Normal   Scheduled  10m                    default-scheduler  Successfully assigned default/mysql-58ccfd96bb-lwjg6 to functional-728898
	  Warning  Failed     9m15s                  kubelet            Failed to pull image "docker.io/mysql:5.7": initializing source docker://mysql:5.7: reading manifest 5.7 in docker.io/library/mysql: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit
	  Warning  Failed     6m25s (x2 over 7m59s)  kubelet            Failed to pull image "docker.io/mysql:5.7": reading manifest 5.7 in docker.io/library/mysql: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit
	  Normal   Pulling    3m25s (x5 over 10m)    kubelet            Pulling image "docker.io/mysql:5.7"
	  Warning  Failed     2m54s (x5 over 9m15s)  kubelet            Error: ErrImagePull
	  Warning  Failed     2m54s (x2 over 4m59s)  kubelet            Failed to pull image "docker.io/mysql:5.7": fetching target platform image selected from image index: reading manifest sha256:dab0a802b44617303694fb17d166501de279c3031ddeb28c56ecf7fcab5ef0da in docker.io/library/mysql: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit
	  Warning  Failed     99s (x16 over 9m14s)   kubelet            Error: ImagePullBackOff
	  Normal   BackOff    30s (x21 over 9m14s)   kubelet            Back-off pulling image "docker.io/mysql:5.7"
	
	
	Name:             sp-pod
	Namespace:        default
	Priority:         0
	Service Account:  default
	Node:             functional-728898/192.168.39.151
	Start Time:       Mon, 07 Apr 2025 12:31:35 +0000
	Labels:           test=storage-provisioner
	Annotations:      <none>
	Status:           Pending
	IP:               10.244.0.15
	IPs:
	  IP:  10.244.0.15
	Containers:
	  myfrontend:
	    Container ID:   
	    Image:          docker.io/nginx
	    Image ID:       
	    Port:           <none>
	    Host Port:      <none>
	    State:          Waiting
	      Reason:       ImagePullBackOff
	    Ready:          False
	    Restart Count:  0
	    Environment:    <none>
	    Mounts:
	      /tmp/mount from mypd (rw)
	      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-6cmgj (ro)
	Conditions:
	  Type                        Status
	  PodReadyToStartContainers   True 
	  Initialized                 True 
	  Ready                       False 
	  ContainersReady             False 
	  PodScheduled                True 
	Volumes:
	  mypd:
	    Type:       PersistentVolumeClaim (a reference to a PersistentVolumeClaim in the same namespace)
	    ClaimName:  myclaim
	    ReadOnly:   false
	  kube-api-access-6cmgj:
	    Type:                    Projected (a volume that contains injected data from multiple sources)
	    TokenExpirationSeconds:  3607
	    ConfigMapName:           kube-root-ca.crt
	    ConfigMapOptional:       <nil>
	    DownwardAPI:             true
	QoS Class:                   BestEffort
	Node-Selectors:              <none>
	Tolerations:                 node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
	                             node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
	Events:
	  Type     Reason     Age                    From               Message
	  ----     ------     ----                   ----               -------
	  Normal   Scheduled  9m55s                  default-scheduler  Successfully assigned default/sp-pod to functional-728898
	  Warning  Failed     6m57s                  kubelet            Failed to pull image "docker.io/nginx": fetching target platform image selected from image index: reading manifest sha256:54809b2f36d0ff38e8e5362b0239779e4b75c2f19ad70ef047ed050f01506bb4 in docker.io/library/nginx: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit
	  Normal   Pulling    2m58s (x5 over 9m55s)  kubelet            Pulling image "docker.io/nginx"
	  Warning  Failed     2m22s (x4 over 8m39s)  kubelet            Failed to pull image "docker.io/nginx": reading manifest latest in docker.io/library/nginx: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit
	  Warning  Failed     2m22s (x5 over 8m39s)  kubelet            Error: ErrImagePull
	  Warning  Failed     67s (x16 over 8m39s)   kubelet            Error: ImagePullBackOff
	  Normal   BackOff    2s (x21 over 8m39s)    kubelet            Back-off pulling image "docker.io/nginx"

                                                
                                                
-- /stdout --
helpers_test.go:285: <<< TestFunctional/parallel/MySQL FAILED: end of post-mortem logs <<<
helpers_test.go:286: ---------------------/post-mortem---------------------------------
--- FAIL: TestFunctional/parallel/MySQL (602.71s)
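The describe output above shows why the test timed out: both default/mysql-58ccfd96bb-lwjg6 and default/sp-pod sit in ImagePullBackOff because every unauthenticated pull of docker.io/mysql:5.7 and docker.io/nginx hits the Docker Hub rate limit ("toomanyrequests"). A minimal sketch of how the same failure can be reproduced, or worked around, outside the test harness, using the same profile name (functional-728898) and the same image subcommands the report itself invokes (a locally installed minikube binary can stand in for out/minikube-linux-amd64):

  # Reproduce the pull through minikube's image cache; on a rate-limited runner
  # this surfaces the same "toomanyrequests" error the kubelet reported.
  out/minikube-linux-amd64 -p functional-728898 image pull docker.io/mysql:5.7

  # Or pull on a host that is logged in to Docker Hub (authenticated pulls have a
  # higher limit) and side-load the image so the kubelet never contacts docker.io.
  docker pull docker.io/mysql:5.7
  out/minikube-linux-amd64 -p functional-728898 image load docker.io/mysql:5.7
  out/minikube-linux-amd64 -p functional-728898 image list | grep mysql

This only illustrates the failure mode seen in the logs; it does not retroactively change the recorded test result.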

                                                
                                    
x
+
TestPreload (174.8s)

                                                
                                                
=== RUN   TestPreload
preload_test.go:44: (dbg) Run:  out/minikube-linux-amd64 start -p test-preload-271062 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.24.4
preload_test.go:44: (dbg) Done: out/minikube-linux-amd64 start -p test-preload-271062 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.24.4: (1m38.460089491s)
preload_test.go:52: (dbg) Run:  out/minikube-linux-amd64 -p test-preload-271062 image pull gcr.io/k8s-minikube/busybox
preload_test.go:52: (dbg) Done: out/minikube-linux-amd64 -p test-preload-271062 image pull gcr.io/k8s-minikube/busybox: (4.21342978s)
preload_test.go:58: (dbg) Run:  out/minikube-linux-amd64 stop -p test-preload-271062
preload_test.go:58: (dbg) Done: out/minikube-linux-amd64 stop -p test-preload-271062: (7.326110211s)
preload_test.go:66: (dbg) Run:  out/minikube-linux-amd64 start -p test-preload-271062 --memory=2200 --alsologtostderr -v=1 --wait=true --driver=kvm2  --container-runtime=crio
E0407 13:26:09.344448 1169716 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20602-1162386/.minikube/profiles/functional-728898/client.crt: no such file or directory" logger="UnhandledError"
preload_test.go:66: (dbg) Done: out/minikube-linux-amd64 start -p test-preload-271062 --memory=2200 --alsologtostderr -v=1 --wait=true --driver=kvm2  --container-runtime=crio: (1m1.055551749s)
preload_test.go:71: (dbg) Run:  out/minikube-linux-amd64 -p test-preload-271062 image list
preload_test.go:76: Expected to find gcr.io/k8s-minikube/busybox in image list output, instead got 
-- stdout --
	registry.k8s.io/pause:3.7
	registry.k8s.io/kube-scheduler:v1.24.4
	registry.k8s.io/kube-proxy:v1.24.4
	registry.k8s.io/kube-controller-manager:v1.24.4
	registry.k8s.io/kube-apiserver:v1.24.4
	registry.k8s.io/etcd:3.5.3-0
	registry.k8s.io/coredns/coredns:v1.8.6
	k8s.gcr.io/pause:3.7
	k8s.gcr.io/kube-scheduler:v1.24.4
	k8s.gcr.io/kube-proxy:v1.24.4
	k8s.gcr.io/kube-controller-manager:v1.24.4
	k8s.gcr.io/kube-apiserver:v1.24.4
	k8s.gcr.io/etcd:3.5.3-0
	k8s.gcr.io/coredns/coredns:v1.8.6
	gcr.io/k8s-minikube/storage-provisioner:v5
	docker.io/kindest/kindnetd:v20220726-ed811e41

                                                
                                                
-- /stdout --
panic.go:631: *** TestPreload FAILED at 2025-04-07 13:27:02.218225904 +0000 UTC m=+4417.983137556
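The assertion that fails here expects the busybox image pulled before the stop/start cycle to survive the restart, but gcr.io/k8s-minikube/busybox is absent from the post-restart image list above, presumably because the second start re-provisions the image store from the published v1.24.4 preload tarball. A rough sketch of the same check run by hand, using only the flags and subcommands that appear in the run log; the profile name busybox-preload-check is hypothetical:

  # 1. Create the cluster without preload, as the test does.
  out/minikube-linux-amd64 start -p busybox-preload-check --memory=2200 \
    --preload=false --driver=kvm2 --container-runtime=crio --kubernetes-version=v1.24.4

  # 2. Pull an extra image into the node's image store.
  out/minikube-linux-amd64 -p busybox-preload-check image pull gcr.io/k8s-minikube/busybox

  # 3. Stop and start again; the restart picks up the preloaded image tarball.
  out/minikube-linux-amd64 stop -p busybox-preload-check
  out/minikube-linux-amd64 start -p busybox-preload-check --driver=kvm2 --container-runtime=crio

  # 4. The test's expectation: busybox should still be listed after the restart.
  out/minikube-linux-amd64 -p busybox-preload-check image list | grep k8s-minikube/busybox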
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p test-preload-271062 -n test-preload-271062
helpers_test.go:244: <<< TestPreload FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestPreload]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p test-preload-271062 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-amd64 -p test-preload-271062 logs -n 25: (1.356514723s)
helpers_test.go:252: TestPreload logs: 
-- stdout --
	
	==> Audit <==
	|---------|-----------------------------------------------------------------------------------------|----------------------|---------|---------|---------------------|---------------------|
	| Command |                                          Args                                           |       Profile        |  User   | Version |     Start Time      |      End Time       |
	|---------|-----------------------------------------------------------------------------------------|----------------------|---------|---------|---------------------|---------------------|
	| ssh     | multinode-522935 ssh -n                                                                 | multinode-522935     | jenkins | v1.35.0 | 07 Apr 25 13:11 UTC | 07 Apr 25 13:11 UTC |
	|         | multinode-522935-m03 sudo cat                                                           |                      |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                                |                      |         |         |                     |                     |
	| ssh     | multinode-522935 ssh -n multinode-522935 sudo cat                                       | multinode-522935     | jenkins | v1.35.0 | 07 Apr 25 13:11 UTC | 07 Apr 25 13:11 UTC |
	|         | /home/docker/cp-test_multinode-522935-m03_multinode-522935.txt                          |                      |         |         |                     |                     |
	| cp      | multinode-522935 cp multinode-522935-m03:/home/docker/cp-test.txt                       | multinode-522935     | jenkins | v1.35.0 | 07 Apr 25 13:11 UTC | 07 Apr 25 13:11 UTC |
	|         | multinode-522935-m02:/home/docker/cp-test_multinode-522935-m03_multinode-522935-m02.txt |                      |         |         |                     |                     |
	| ssh     | multinode-522935 ssh -n                                                                 | multinode-522935     | jenkins | v1.35.0 | 07 Apr 25 13:11 UTC | 07 Apr 25 13:11 UTC |
	|         | multinode-522935-m03 sudo cat                                                           |                      |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                                |                      |         |         |                     |                     |
	| ssh     | multinode-522935 ssh -n multinode-522935-m02 sudo cat                                   | multinode-522935     | jenkins | v1.35.0 | 07 Apr 25 13:11 UTC | 07 Apr 25 13:11 UTC |
	|         | /home/docker/cp-test_multinode-522935-m03_multinode-522935-m02.txt                      |                      |         |         |                     |                     |
	| node    | multinode-522935 node stop m03                                                          | multinode-522935     | jenkins | v1.35.0 | 07 Apr 25 13:11 UTC | 07 Apr 25 13:11 UTC |
	| node    | multinode-522935 node start                                                             | multinode-522935     | jenkins | v1.35.0 | 07 Apr 25 13:11 UTC | 07 Apr 25 13:11 UTC |
	|         | m03 -v=7 --alsologtostderr                                                              |                      |         |         |                     |                     |
	| node    | list -p multinode-522935                                                                | multinode-522935     | jenkins | v1.35.0 | 07 Apr 25 13:11 UTC |                     |
	| stop    | -p multinode-522935                                                                     | multinode-522935     | jenkins | v1.35.0 | 07 Apr 25 13:11 UTC | 07 Apr 25 13:15 UTC |
	| start   | -p multinode-522935                                                                     | multinode-522935     | jenkins | v1.35.0 | 07 Apr 25 13:15 UTC | 07 Apr 25 13:18 UTC |
	|         | --wait=true -v=8                                                                        |                      |         |         |                     |                     |
	|         | --alsologtostderr                                                                       |                      |         |         |                     |                     |
	| node    | list -p multinode-522935                                                                | multinode-522935     | jenkins | v1.35.0 | 07 Apr 25 13:18 UTC |                     |
	| node    | multinode-522935 node delete                                                            | multinode-522935     | jenkins | v1.35.0 | 07 Apr 25 13:18 UTC | 07 Apr 25 13:18 UTC |
	|         | m03                                                                                     |                      |         |         |                     |                     |
	| stop    | multinode-522935 stop                                                                   | multinode-522935     | jenkins | v1.35.0 | 07 Apr 25 13:18 UTC | 07 Apr 25 13:21 UTC |
	| start   | -p multinode-522935                                                                     | multinode-522935     | jenkins | v1.35.0 | 07 Apr 25 13:21 UTC | 07 Apr 25 13:23 UTC |
	|         | --wait=true -v=8                                                                        |                      |         |         |                     |                     |
	|         | --alsologtostderr                                                                       |                      |         |         |                     |                     |
	|         | --driver=kvm2                                                                           |                      |         |         |                     |                     |
	|         | --container-runtime=crio                                                                |                      |         |         |                     |                     |
	| node    | list -p multinode-522935                                                                | multinode-522935     | jenkins | v1.35.0 | 07 Apr 25 13:23 UTC |                     |
	| start   | -p multinode-522935-m02                                                                 | multinode-522935-m02 | jenkins | v1.35.0 | 07 Apr 25 13:23 UTC |                     |
	|         | --driver=kvm2                                                                           |                      |         |         |                     |                     |
	|         | --container-runtime=crio                                                                |                      |         |         |                     |                     |
	| start   | -p multinode-522935-m03                                                                 | multinode-522935-m03 | jenkins | v1.35.0 | 07 Apr 25 13:23 UTC | 07 Apr 25 13:24 UTC |
	|         | --driver=kvm2                                                                           |                      |         |         |                     |                     |
	|         | --container-runtime=crio                                                                |                      |         |         |                     |                     |
	| node    | add -p multinode-522935                                                                 | multinode-522935     | jenkins | v1.35.0 | 07 Apr 25 13:24 UTC |                     |
	| delete  | -p multinode-522935-m03                                                                 | multinode-522935-m03 | jenkins | v1.35.0 | 07 Apr 25 13:24 UTC | 07 Apr 25 13:24 UTC |
	| delete  | -p multinode-522935                                                                     | multinode-522935     | jenkins | v1.35.0 | 07 Apr 25 13:24 UTC | 07 Apr 25 13:24 UTC |
	| start   | -p test-preload-271062                                                                  | test-preload-271062  | jenkins | v1.35.0 | 07 Apr 25 13:24 UTC | 07 Apr 25 13:25 UTC |
	|         | --memory=2200                                                                           |                      |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                                                           |                      |         |         |                     |                     |
	|         | --preload=false --driver=kvm2                                                           |                      |         |         |                     |                     |
	|         |  --container-runtime=crio                                                               |                      |         |         |                     |                     |
	|         | --kubernetes-version=v1.24.4                                                            |                      |         |         |                     |                     |
	| image   | test-preload-271062 image pull                                                          | test-preload-271062  | jenkins | v1.35.0 | 07 Apr 25 13:25 UTC | 07 Apr 25 13:25 UTC |
	|         | gcr.io/k8s-minikube/busybox                                                             |                      |         |         |                     |                     |
	| stop    | -p test-preload-271062                                                                  | test-preload-271062  | jenkins | v1.35.0 | 07 Apr 25 13:25 UTC | 07 Apr 25 13:26 UTC |
	| start   | -p test-preload-271062                                                                  | test-preload-271062  | jenkins | v1.35.0 | 07 Apr 25 13:26 UTC | 07 Apr 25 13:27 UTC |
	|         | --memory=2200                                                                           |                      |         |         |                     |                     |
	|         | --alsologtostderr -v=1                                                                  |                      |         |         |                     |                     |
	|         | --wait=true --driver=kvm2                                                               |                      |         |         |                     |                     |
	|         | --container-runtime=crio                                                                |                      |         |         |                     |                     |
	| image   | test-preload-271062 image list                                                          | test-preload-271062  | jenkins | v1.35.0 | 07 Apr 25 13:27 UTC | 07 Apr 25 13:27 UTC |
	|---------|-----------------------------------------------------------------------------------------|----------------------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2025/04/07 13:26:00
	Running on machine: ubuntu-20-agent-2
	Binary: Built with gc go1.24.0 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0407 13:26:00.955553 1205243 out.go:345] Setting OutFile to fd 1 ...
	I0407 13:26:00.955767 1205243 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0407 13:26:00.955783 1205243 out.go:358] Setting ErrFile to fd 2...
	I0407 13:26:00.955793 1205243 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0407 13:26:00.956042 1205243 root.go:338] Updating PATH: /home/jenkins/minikube-integration/20602-1162386/.minikube/bin
	I0407 13:26:00.956813 1205243 out.go:352] Setting JSON to false
	I0407 13:26:00.958029 1205243 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-2","uptime":18505,"bootTime":1744013856,"procs":179,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1078-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0407 13:26:00.958166 1205243 start.go:139] virtualization: kvm guest
	I0407 13:26:00.960945 1205243 out.go:177] * [test-preload-271062] minikube v1.35.0 on Ubuntu 20.04 (kvm/amd64)
	I0407 13:26:00.963043 1205243 notify.go:220] Checking for updates...
	I0407 13:26:00.963165 1205243 out.go:177]   - MINIKUBE_LOCATION=20602
	I0407 13:26:00.965603 1205243 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0407 13:26:00.968069 1205243 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/20602-1162386/kubeconfig
	I0407 13:26:00.970935 1205243 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/20602-1162386/.minikube
	I0407 13:26:00.972978 1205243 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0407 13:26:00.974738 1205243 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0407 13:26:00.977228 1205243 config.go:182] Loaded profile config "test-preload-271062": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.24.4
	I0407 13:26:00.978104 1205243 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0407 13:26:00.978210 1205243 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0407 13:26:00.997013 1205243 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45493
	I0407 13:26:00.997735 1205243 main.go:141] libmachine: () Calling .GetVersion
	I0407 13:26:00.998402 1205243 main.go:141] libmachine: Using API Version  1
	I0407 13:26:00.998434 1205243 main.go:141] libmachine: () Calling .SetConfigRaw
	I0407 13:26:00.999009 1205243 main.go:141] libmachine: () Calling .GetMachineName
	I0407 13:26:00.999325 1205243 main.go:141] libmachine: (test-preload-271062) Calling .DriverName
	I0407 13:26:01.002052 1205243 out.go:177] * Kubernetes 1.32.2 is now available. If you would like to upgrade, specify: --kubernetes-version=v1.32.2
	I0407 13:26:01.003942 1205243 driver.go:394] Setting default libvirt URI to qemu:///system
	I0407 13:26:01.004542 1205243 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0407 13:26:01.004681 1205243 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0407 13:26:01.021549 1205243 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:32907
	I0407 13:26:01.022162 1205243 main.go:141] libmachine: () Calling .GetVersion
	I0407 13:26:01.022700 1205243 main.go:141] libmachine: Using API Version  1
	I0407 13:26:01.022726 1205243 main.go:141] libmachine: () Calling .SetConfigRaw
	I0407 13:26:01.023163 1205243 main.go:141] libmachine: () Calling .GetMachineName
	I0407 13:26:01.023437 1205243 main.go:141] libmachine: (test-preload-271062) Calling .DriverName
	I0407 13:26:01.066030 1205243 out.go:177] * Using the kvm2 driver based on existing profile
	I0407 13:26:01.067631 1205243 start.go:297] selected driver: kvm2
	I0407 13:26:01.067652 1205243 start.go:901] validating driver "kvm2" against &{Name:test-preload-271062 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.35.0-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.46-1743675393-20591@sha256:8de8167e280414c9d16e4c3da59bc85bc7c9cc24228af995863fc7bcabfcf727 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.24.4 Cluster
Name:test-preload-271062 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.95 Port:8443 KubernetesVersion:v1.24.4 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountM
Size:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0407 13:26:01.067782 1205243 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0407 13:26:01.068585 1205243 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0407 13:26:01.068704 1205243 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/20602-1162386/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0407 13:26:01.086300 1205243 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.35.0
	I0407 13:26:01.086742 1205243 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0407 13:26:01.086788 1205243 cni.go:84] Creating CNI manager for ""
	I0407 13:26:01.086858 1205243 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0407 13:26:01.086931 1205243 start.go:340] cluster config:
	{Name:test-preload-271062 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.35.0-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.46-1743675393-20591@sha256:8de8167e280414c9d16e4c3da59bc85bc7c9cc24228af995863fc7bcabfcf727 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.24.4 ClusterName:test-preload-271062 Namespace:default APIServerHAVIP: APIServerName:minik
ubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.95 Port:8443 KubernetesVersion:v1.24.4 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMir
ror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0407 13:26:01.087065 1205243 iso.go:125] acquiring lock: {Name:mk51e1827709f7a3810dbd898083f8185ece65eb Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0407 13:26:01.089030 1205243 out.go:177] * Starting "test-preload-271062" primary control-plane node in "test-preload-271062" cluster
	I0407 13:26:01.090343 1205243 preload.go:131] Checking if preload exists for k8s version v1.24.4 and runtime crio
	I0407 13:26:01.115634 1205243 preload.go:118] Found remote preload: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.24.4/preloaded-images-k8s-v18-v1.24.4-cri-o-overlay-amd64.tar.lz4
	I0407 13:26:01.115715 1205243 cache.go:56] Caching tarball of preloaded images
	I0407 13:26:01.115941 1205243 preload.go:131] Checking if preload exists for k8s version v1.24.4 and runtime crio
	I0407 13:26:01.117618 1205243 out.go:177] * Downloading Kubernetes v1.24.4 preload ...
	I0407 13:26:01.119012 1205243 preload.go:236] getting checksum for preloaded-images-k8s-v18-v1.24.4-cri-o-overlay-amd64.tar.lz4 ...
	I0407 13:26:01.146155 1205243 download.go:108] Downloading: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.24.4/preloaded-images-k8s-v18-v1.24.4-cri-o-overlay-amd64.tar.lz4?checksum=md5:b2ee0ab83ed99f9e7ff71cb0cf27e8f9 -> /home/jenkins/minikube-integration/20602-1162386/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.24.4-cri-o-overlay-amd64.tar.lz4
	I0407 13:26:04.565331 1205243 preload.go:247] saving checksum for preloaded-images-k8s-v18-v1.24.4-cri-o-overlay-amd64.tar.lz4 ...
	I0407 13:26:04.565439 1205243 preload.go:254] verifying checksum of /home/jenkins/minikube-integration/20602-1162386/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.24.4-cri-o-overlay-amd64.tar.lz4 ...
	I0407 13:26:05.448053 1205243 cache.go:59] Finished verifying existence of preloaded tar for v1.24.4 on crio
	I0407 13:26:05.448188 1205243 profile.go:143] Saving config to /home/jenkins/minikube-integration/20602-1162386/.minikube/profiles/test-preload-271062/config.json ...
	I0407 13:26:05.448431 1205243 start.go:360] acquireMachinesLock for test-preload-271062: {Name:mk51d4c744fa92d56cf6ac11b1e792c85ef6709a Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0407 13:26:05.448499 1205243 start.go:364] duration metric: took 43.291µs to acquireMachinesLock for "test-preload-271062"
	I0407 13:26:05.448516 1205243 start.go:96] Skipping create...Using existing machine configuration
	I0407 13:26:05.448522 1205243 fix.go:54] fixHost starting: 
	I0407 13:26:05.448812 1205243 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0407 13:26:05.448852 1205243 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0407 13:26:05.464673 1205243 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42573
	I0407 13:26:05.465317 1205243 main.go:141] libmachine: () Calling .GetVersion
	I0407 13:26:05.465897 1205243 main.go:141] libmachine: Using API Version  1
	I0407 13:26:05.465923 1205243 main.go:141] libmachine: () Calling .SetConfigRaw
	I0407 13:26:05.466291 1205243 main.go:141] libmachine: () Calling .GetMachineName
	I0407 13:26:05.466510 1205243 main.go:141] libmachine: (test-preload-271062) Calling .DriverName
	I0407 13:26:05.466654 1205243 main.go:141] libmachine: (test-preload-271062) Calling .GetState
	I0407 13:26:05.468946 1205243 fix.go:112] recreateIfNeeded on test-preload-271062: state=Stopped err=<nil>
	I0407 13:26:05.468982 1205243 main.go:141] libmachine: (test-preload-271062) Calling .DriverName
	W0407 13:26:05.469211 1205243 fix.go:138] unexpected machine state, will restart: <nil>
	I0407 13:26:05.471870 1205243 out.go:177] * Restarting existing kvm2 VM for "test-preload-271062" ...
	I0407 13:26:05.473271 1205243 main.go:141] libmachine: (test-preload-271062) Calling .Start
	I0407 13:26:05.473647 1205243 main.go:141] libmachine: (test-preload-271062) starting domain...
	I0407 13:26:05.473673 1205243 main.go:141] libmachine: (test-preload-271062) ensuring networks are active...
	I0407 13:26:05.474600 1205243 main.go:141] libmachine: (test-preload-271062) Ensuring network default is active
	I0407 13:26:05.475013 1205243 main.go:141] libmachine: (test-preload-271062) Ensuring network mk-test-preload-271062 is active
	I0407 13:26:05.475467 1205243 main.go:141] libmachine: (test-preload-271062) getting domain XML...
	I0407 13:26:05.476331 1205243 main.go:141] libmachine: (test-preload-271062) creating domain...
	I0407 13:26:06.829389 1205243 main.go:141] libmachine: (test-preload-271062) waiting for IP...
	I0407 13:26:06.830599 1205243 main.go:141] libmachine: (test-preload-271062) DBG | domain test-preload-271062 has defined MAC address 52:54:00:e5:8f:41 in network mk-test-preload-271062
	I0407 13:26:06.831311 1205243 main.go:141] libmachine: (test-preload-271062) DBG | unable to find current IP address of domain test-preload-271062 in network mk-test-preload-271062
	I0407 13:26:06.831519 1205243 main.go:141] libmachine: (test-preload-271062) DBG | I0407 13:26:06.831372 1205303 retry.go:31] will retry after 264.240217ms: waiting for domain to come up
	I0407 13:26:07.097226 1205243 main.go:141] libmachine: (test-preload-271062) DBG | domain test-preload-271062 has defined MAC address 52:54:00:e5:8f:41 in network mk-test-preload-271062
	I0407 13:26:07.098039 1205243 main.go:141] libmachine: (test-preload-271062) DBG | unable to find current IP address of domain test-preload-271062 in network mk-test-preload-271062
	I0407 13:26:07.098090 1205243 main.go:141] libmachine: (test-preload-271062) DBG | I0407 13:26:07.097864 1205303 retry.go:31] will retry after 251.054908ms: waiting for domain to come up
	I0407 13:26:07.350800 1205243 main.go:141] libmachine: (test-preload-271062) DBG | domain test-preload-271062 has defined MAC address 52:54:00:e5:8f:41 in network mk-test-preload-271062
	I0407 13:26:07.351348 1205243 main.go:141] libmachine: (test-preload-271062) DBG | unable to find current IP address of domain test-preload-271062 in network mk-test-preload-271062
	I0407 13:26:07.351379 1205243 main.go:141] libmachine: (test-preload-271062) DBG | I0407 13:26:07.351315 1205303 retry.go:31] will retry after 433.836831ms: waiting for domain to come up
	I0407 13:26:07.787455 1205243 main.go:141] libmachine: (test-preload-271062) DBG | domain test-preload-271062 has defined MAC address 52:54:00:e5:8f:41 in network mk-test-preload-271062
	I0407 13:26:07.788228 1205243 main.go:141] libmachine: (test-preload-271062) DBG | unable to find current IP address of domain test-preload-271062 in network mk-test-preload-271062
	I0407 13:26:07.788264 1205243 main.go:141] libmachine: (test-preload-271062) DBG | I0407 13:26:07.788156 1205303 retry.go:31] will retry after 391.001465ms: waiting for domain to come up
	I0407 13:26:08.181234 1205243 main.go:141] libmachine: (test-preload-271062) DBG | domain test-preload-271062 has defined MAC address 52:54:00:e5:8f:41 in network mk-test-preload-271062
	I0407 13:26:08.181851 1205243 main.go:141] libmachine: (test-preload-271062) DBG | unable to find current IP address of domain test-preload-271062 in network mk-test-preload-271062
	I0407 13:26:08.181889 1205243 main.go:141] libmachine: (test-preload-271062) DBG | I0407 13:26:08.181807 1205303 retry.go:31] will retry after 617.123752ms: waiting for domain to come up
	I0407 13:26:08.800710 1205243 main.go:141] libmachine: (test-preload-271062) DBG | domain test-preload-271062 has defined MAC address 52:54:00:e5:8f:41 in network mk-test-preload-271062
	I0407 13:26:08.801357 1205243 main.go:141] libmachine: (test-preload-271062) DBG | unable to find current IP address of domain test-preload-271062 in network mk-test-preload-271062
	I0407 13:26:08.801384 1205243 main.go:141] libmachine: (test-preload-271062) DBG | I0407 13:26:08.801316 1205303 retry.go:31] will retry after 715.108531ms: waiting for domain to come up
	I0407 13:26:09.518428 1205243 main.go:141] libmachine: (test-preload-271062) DBG | domain test-preload-271062 has defined MAC address 52:54:00:e5:8f:41 in network mk-test-preload-271062
	I0407 13:26:09.518853 1205243 main.go:141] libmachine: (test-preload-271062) DBG | unable to find current IP address of domain test-preload-271062 in network mk-test-preload-271062
	I0407 13:26:09.518883 1205243 main.go:141] libmachine: (test-preload-271062) DBG | I0407 13:26:09.518811 1205303 retry.go:31] will retry after 966.96729ms: waiting for domain to come up
	I0407 13:26:10.487967 1205243 main.go:141] libmachine: (test-preload-271062) DBG | domain test-preload-271062 has defined MAC address 52:54:00:e5:8f:41 in network mk-test-preload-271062
	I0407 13:26:10.489071 1205243 main.go:141] libmachine: (test-preload-271062) DBG | unable to find current IP address of domain test-preload-271062 in network mk-test-preload-271062
	I0407 13:26:10.489114 1205243 main.go:141] libmachine: (test-preload-271062) DBG | I0407 13:26:10.489002 1205303 retry.go:31] will retry after 1.041207548s: waiting for domain to come up
	I0407 13:26:11.532338 1205243 main.go:141] libmachine: (test-preload-271062) DBG | domain test-preload-271062 has defined MAC address 52:54:00:e5:8f:41 in network mk-test-preload-271062
	I0407 13:26:11.532816 1205243 main.go:141] libmachine: (test-preload-271062) DBG | unable to find current IP address of domain test-preload-271062 in network mk-test-preload-271062
	I0407 13:26:11.532857 1205243 main.go:141] libmachine: (test-preload-271062) DBG | I0407 13:26:11.532802 1205303 retry.go:31] will retry after 1.406069114s: waiting for domain to come up
	I0407 13:26:12.941671 1205243 main.go:141] libmachine: (test-preload-271062) DBG | domain test-preload-271062 has defined MAC address 52:54:00:e5:8f:41 in network mk-test-preload-271062
	I0407 13:26:12.942172 1205243 main.go:141] libmachine: (test-preload-271062) DBG | unable to find current IP address of domain test-preload-271062 in network mk-test-preload-271062
	I0407 13:26:12.942220 1205243 main.go:141] libmachine: (test-preload-271062) DBG | I0407 13:26:12.942169 1205303 retry.go:31] will retry after 2.201727196s: waiting for domain to come up
	I0407 13:26:15.146912 1205243 main.go:141] libmachine: (test-preload-271062) DBG | domain test-preload-271062 has defined MAC address 52:54:00:e5:8f:41 in network mk-test-preload-271062
	I0407 13:26:15.147438 1205243 main.go:141] libmachine: (test-preload-271062) DBG | unable to find current IP address of domain test-preload-271062 in network mk-test-preload-271062
	I0407 13:26:15.147470 1205243 main.go:141] libmachine: (test-preload-271062) DBG | I0407 13:26:15.147394 1205303 retry.go:31] will retry after 1.764058157s: waiting for domain to come up
	I0407 13:26:16.913264 1205243 main.go:141] libmachine: (test-preload-271062) DBG | domain test-preload-271062 has defined MAC address 52:54:00:e5:8f:41 in network mk-test-preload-271062
	I0407 13:26:16.913834 1205243 main.go:141] libmachine: (test-preload-271062) DBG | unable to find current IP address of domain test-preload-271062 in network mk-test-preload-271062
	I0407 13:26:16.913861 1205243 main.go:141] libmachine: (test-preload-271062) DBG | I0407 13:26:16.913776 1205303 retry.go:31] will retry after 3.414565653s: waiting for domain to come up
	I0407 13:26:20.332079 1205243 main.go:141] libmachine: (test-preload-271062) DBG | domain test-preload-271062 has defined MAC address 52:54:00:e5:8f:41 in network mk-test-preload-271062
	I0407 13:26:20.332985 1205243 main.go:141] libmachine: (test-preload-271062) DBG | unable to find current IP address of domain test-preload-271062 in network mk-test-preload-271062
	I0407 13:26:20.333011 1205243 main.go:141] libmachine: (test-preload-271062) DBG | I0407 13:26:20.332894 1205303 retry.go:31] will retry after 3.714922744s: waiting for domain to come up
	I0407 13:26:24.052021 1205243 main.go:141] libmachine: (test-preload-271062) DBG | domain test-preload-271062 has defined MAC address 52:54:00:e5:8f:41 in network mk-test-preload-271062
	I0407 13:26:24.052585 1205243 main.go:141] libmachine: (test-preload-271062) found domain IP: 192.168.39.95
	I0407 13:26:24.052619 1205243 main.go:141] libmachine: (test-preload-271062) DBG | domain test-preload-271062 has current primary IP address 192.168.39.95 and MAC address 52:54:00:e5:8f:41 in network mk-test-preload-271062
	I0407 13:26:24.052627 1205243 main.go:141] libmachine: (test-preload-271062) reserving static IP address...
	I0407 13:26:24.053270 1205243 main.go:141] libmachine: (test-preload-271062) DBG | found host DHCP lease matching {name: "test-preload-271062", mac: "52:54:00:e5:8f:41", ip: "192.168.39.95"} in network mk-test-preload-271062: {Iface:virbr1 ExpiryTime:2025-04-07 14:26:16 +0000 UTC Type:0 Mac:52:54:00:e5:8f:41 Iaid: IPaddr:192.168.39.95 Prefix:24 Hostname:test-preload-271062 Clientid:01:52:54:00:e5:8f:41}
	I0407 13:26:24.053297 1205243 main.go:141] libmachine: (test-preload-271062) DBG | skip adding static IP to network mk-test-preload-271062 - found existing host DHCP lease matching {name: "test-preload-271062", mac: "52:54:00:e5:8f:41", ip: "192.168.39.95"}
	I0407 13:26:24.053311 1205243 main.go:141] libmachine: (test-preload-271062) reserved static IP address 192.168.39.95 for domain test-preload-271062
	I0407 13:26:24.053323 1205243 main.go:141] libmachine: (test-preload-271062) waiting for SSH...
	I0407 13:26:24.053331 1205243 main.go:141] libmachine: (test-preload-271062) DBG | Getting to WaitForSSH function...
	I0407 13:26:24.057475 1205243 main.go:141] libmachine: (test-preload-271062) DBG | domain test-preload-271062 has defined MAC address 52:54:00:e5:8f:41 in network mk-test-preload-271062
	I0407 13:26:24.058010 1205243 main.go:141] libmachine: (test-preload-271062) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e5:8f:41", ip: ""} in network mk-test-preload-271062: {Iface:virbr1 ExpiryTime:2025-04-07 14:26:16 +0000 UTC Type:0 Mac:52:54:00:e5:8f:41 Iaid: IPaddr:192.168.39.95 Prefix:24 Hostname:test-preload-271062 Clientid:01:52:54:00:e5:8f:41}
	I0407 13:26:24.058048 1205243 main.go:141] libmachine: (test-preload-271062) DBG | domain test-preload-271062 has defined IP address 192.168.39.95 and MAC address 52:54:00:e5:8f:41 in network mk-test-preload-271062
	I0407 13:26:24.058266 1205243 main.go:141] libmachine: (test-preload-271062) DBG | Using SSH client type: external
	I0407 13:26:24.058346 1205243 main.go:141] libmachine: (test-preload-271062) DBG | Using SSH private key: /home/jenkins/minikube-integration/20602-1162386/.minikube/machines/test-preload-271062/id_rsa (-rw-------)
	I0407 13:26:24.058372 1205243 main.go:141] libmachine: (test-preload-271062) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.39.95 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/20602-1162386/.minikube/machines/test-preload-271062/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0407 13:26:24.058385 1205243 main.go:141] libmachine: (test-preload-271062) DBG | About to run SSH command:
	I0407 13:26:24.058396 1205243 main.go:141] libmachine: (test-preload-271062) DBG | exit 0
	I0407 13:26:24.190361 1205243 main.go:141] libmachine: (test-preload-271062) DBG | SSH cmd err, output: <nil>: 
	I0407 13:26:24.190868 1205243 main.go:141] libmachine: (test-preload-271062) Calling .GetConfigRaw
	I0407 13:26:24.191623 1205243 main.go:141] libmachine: (test-preload-271062) Calling .GetIP
	I0407 13:26:24.195113 1205243 main.go:141] libmachine: (test-preload-271062) DBG | domain test-preload-271062 has defined MAC address 52:54:00:e5:8f:41 in network mk-test-preload-271062
	I0407 13:26:24.195492 1205243 main.go:141] libmachine: (test-preload-271062) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e5:8f:41", ip: ""} in network mk-test-preload-271062: {Iface:virbr1 ExpiryTime:2025-04-07 14:26:16 +0000 UTC Type:0 Mac:52:54:00:e5:8f:41 Iaid: IPaddr:192.168.39.95 Prefix:24 Hostname:test-preload-271062 Clientid:01:52:54:00:e5:8f:41}
	I0407 13:26:24.195524 1205243 main.go:141] libmachine: (test-preload-271062) DBG | domain test-preload-271062 has defined IP address 192.168.39.95 and MAC address 52:54:00:e5:8f:41 in network mk-test-preload-271062
	I0407 13:26:24.195783 1205243 profile.go:143] Saving config to /home/jenkins/minikube-integration/20602-1162386/.minikube/profiles/test-preload-271062/config.json ...
	I0407 13:26:24.196032 1205243 machine.go:93] provisionDockerMachine start ...
	I0407 13:26:24.196058 1205243 main.go:141] libmachine: (test-preload-271062) Calling .DriverName
	I0407 13:26:24.196355 1205243 main.go:141] libmachine: (test-preload-271062) Calling .GetSSHHostname
	I0407 13:26:24.199421 1205243 main.go:141] libmachine: (test-preload-271062) DBG | domain test-preload-271062 has defined MAC address 52:54:00:e5:8f:41 in network mk-test-preload-271062
	I0407 13:26:24.199827 1205243 main.go:141] libmachine: (test-preload-271062) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e5:8f:41", ip: ""} in network mk-test-preload-271062: {Iface:virbr1 ExpiryTime:2025-04-07 14:26:16 +0000 UTC Type:0 Mac:52:54:00:e5:8f:41 Iaid: IPaddr:192.168.39.95 Prefix:24 Hostname:test-preload-271062 Clientid:01:52:54:00:e5:8f:41}
	I0407 13:26:24.199870 1205243 main.go:141] libmachine: (test-preload-271062) DBG | domain test-preload-271062 has defined IP address 192.168.39.95 and MAC address 52:54:00:e5:8f:41 in network mk-test-preload-271062
	I0407 13:26:24.200071 1205243 main.go:141] libmachine: (test-preload-271062) Calling .GetSSHPort
	I0407 13:26:24.200286 1205243 main.go:141] libmachine: (test-preload-271062) Calling .GetSSHKeyPath
	I0407 13:26:24.200493 1205243 main.go:141] libmachine: (test-preload-271062) Calling .GetSSHKeyPath
	I0407 13:26:24.200738 1205243 main.go:141] libmachine: (test-preload-271062) Calling .GetSSHUsername
	I0407 13:26:24.200963 1205243 main.go:141] libmachine: Using SSH client type: native
	I0407 13:26:24.201302 1205243 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x836360] 0x839060 <nil>  [] 0s} 192.168.39.95 22 <nil> <nil>}
	I0407 13:26:24.201320 1205243 main.go:141] libmachine: About to run SSH command:
	hostname
	I0407 13:26:24.319069 1205243 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I0407 13:26:24.319106 1205243 main.go:141] libmachine: (test-preload-271062) Calling .GetMachineName
	I0407 13:26:24.319449 1205243 buildroot.go:166] provisioning hostname "test-preload-271062"
	I0407 13:26:24.319485 1205243 main.go:141] libmachine: (test-preload-271062) Calling .GetMachineName
	I0407 13:26:24.319735 1205243 main.go:141] libmachine: (test-preload-271062) Calling .GetSSHHostname
	I0407 13:26:24.323642 1205243 main.go:141] libmachine: (test-preload-271062) DBG | domain test-preload-271062 has defined MAC address 52:54:00:e5:8f:41 in network mk-test-preload-271062
	I0407 13:26:24.324306 1205243 main.go:141] libmachine: (test-preload-271062) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e5:8f:41", ip: ""} in network mk-test-preload-271062: {Iface:virbr1 ExpiryTime:2025-04-07 14:26:16 +0000 UTC Type:0 Mac:52:54:00:e5:8f:41 Iaid: IPaddr:192.168.39.95 Prefix:24 Hostname:test-preload-271062 Clientid:01:52:54:00:e5:8f:41}
	I0407 13:26:24.324339 1205243 main.go:141] libmachine: (test-preload-271062) DBG | domain test-preload-271062 has defined IP address 192.168.39.95 and MAC address 52:54:00:e5:8f:41 in network mk-test-preload-271062
	I0407 13:26:24.324536 1205243 main.go:141] libmachine: (test-preload-271062) Calling .GetSSHPort
	I0407 13:26:24.324768 1205243 main.go:141] libmachine: (test-preload-271062) Calling .GetSSHKeyPath
	I0407 13:26:24.324937 1205243 main.go:141] libmachine: (test-preload-271062) Calling .GetSSHKeyPath
	I0407 13:26:24.325174 1205243 main.go:141] libmachine: (test-preload-271062) Calling .GetSSHUsername
	I0407 13:26:24.325365 1205243 main.go:141] libmachine: Using SSH client type: native
	I0407 13:26:24.325635 1205243 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x836360] 0x839060 <nil>  [] 0s} 192.168.39.95 22 <nil> <nil>}
	I0407 13:26:24.325653 1205243 main.go:141] libmachine: About to run SSH command:
	sudo hostname test-preload-271062 && echo "test-preload-271062" | sudo tee /etc/hostname
	I0407 13:26:24.456045 1205243 main.go:141] libmachine: SSH cmd err, output: <nil>: test-preload-271062
	
	I0407 13:26:24.456083 1205243 main.go:141] libmachine: (test-preload-271062) Calling .GetSSHHostname
	I0407 13:26:24.459255 1205243 main.go:141] libmachine: (test-preload-271062) DBG | domain test-preload-271062 has defined MAC address 52:54:00:e5:8f:41 in network mk-test-preload-271062
	I0407 13:26:24.459567 1205243 main.go:141] libmachine: (test-preload-271062) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e5:8f:41", ip: ""} in network mk-test-preload-271062: {Iface:virbr1 ExpiryTime:2025-04-07 14:26:16 +0000 UTC Type:0 Mac:52:54:00:e5:8f:41 Iaid: IPaddr:192.168.39.95 Prefix:24 Hostname:test-preload-271062 Clientid:01:52:54:00:e5:8f:41}
	I0407 13:26:24.459599 1205243 main.go:141] libmachine: (test-preload-271062) DBG | domain test-preload-271062 has defined IP address 192.168.39.95 and MAC address 52:54:00:e5:8f:41 in network mk-test-preload-271062
	I0407 13:26:24.459951 1205243 main.go:141] libmachine: (test-preload-271062) Calling .GetSSHPort
	I0407 13:26:24.460204 1205243 main.go:141] libmachine: (test-preload-271062) Calling .GetSSHKeyPath
	I0407 13:26:24.460492 1205243 main.go:141] libmachine: (test-preload-271062) Calling .GetSSHKeyPath
	I0407 13:26:24.460695 1205243 main.go:141] libmachine: (test-preload-271062) Calling .GetSSHUsername
	I0407 13:26:24.460892 1205243 main.go:141] libmachine: Using SSH client type: native
	I0407 13:26:24.461111 1205243 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x836360] 0x839060 <nil>  [] 0s} 192.168.39.95 22 <nil> <nil>}
	I0407 13:26:24.461129 1205243 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\stest-preload-271062' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 test-preload-271062/g' /etc/hosts;
				else 
					echo '127.0.1.1 test-preload-271062' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0407 13:26:24.586292 1205243 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0407 13:26:24.586331 1205243 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/20602-1162386/.minikube CaCertPath:/home/jenkins/minikube-integration/20602-1162386/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/20602-1162386/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/20602-1162386/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/20602-1162386/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/20602-1162386/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/20602-1162386/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/20602-1162386/.minikube}
	I0407 13:26:24.586362 1205243 buildroot.go:174] setting up certificates
	I0407 13:26:24.586373 1205243 provision.go:84] configureAuth start
	I0407 13:26:24.586383 1205243 main.go:141] libmachine: (test-preload-271062) Calling .GetMachineName
	I0407 13:26:24.586753 1205243 main.go:141] libmachine: (test-preload-271062) Calling .GetIP
	I0407 13:26:24.589575 1205243 main.go:141] libmachine: (test-preload-271062) DBG | domain test-preload-271062 has defined MAC address 52:54:00:e5:8f:41 in network mk-test-preload-271062
	I0407 13:26:24.589977 1205243 main.go:141] libmachine: (test-preload-271062) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e5:8f:41", ip: ""} in network mk-test-preload-271062: {Iface:virbr1 ExpiryTime:2025-04-07 14:26:16 +0000 UTC Type:0 Mac:52:54:00:e5:8f:41 Iaid: IPaddr:192.168.39.95 Prefix:24 Hostname:test-preload-271062 Clientid:01:52:54:00:e5:8f:41}
	I0407 13:26:24.590022 1205243 main.go:141] libmachine: (test-preload-271062) DBG | domain test-preload-271062 has defined IP address 192.168.39.95 and MAC address 52:54:00:e5:8f:41 in network mk-test-preload-271062
	I0407 13:26:24.590274 1205243 main.go:141] libmachine: (test-preload-271062) Calling .GetSSHHostname
	I0407 13:26:24.592436 1205243 main.go:141] libmachine: (test-preload-271062) DBG | domain test-preload-271062 has defined MAC address 52:54:00:e5:8f:41 in network mk-test-preload-271062
	I0407 13:26:24.592824 1205243 main.go:141] libmachine: (test-preload-271062) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e5:8f:41", ip: ""} in network mk-test-preload-271062: {Iface:virbr1 ExpiryTime:2025-04-07 14:26:16 +0000 UTC Type:0 Mac:52:54:00:e5:8f:41 Iaid: IPaddr:192.168.39.95 Prefix:24 Hostname:test-preload-271062 Clientid:01:52:54:00:e5:8f:41}
	I0407 13:26:24.592865 1205243 main.go:141] libmachine: (test-preload-271062) DBG | domain test-preload-271062 has defined IP address 192.168.39.95 and MAC address 52:54:00:e5:8f:41 in network mk-test-preload-271062
	I0407 13:26:24.593059 1205243 provision.go:143] copyHostCerts
	I0407 13:26:24.593119 1205243 exec_runner.go:144] found /home/jenkins/minikube-integration/20602-1162386/.minikube/ca.pem, removing ...
	I0407 13:26:24.593141 1205243 exec_runner.go:203] rm: /home/jenkins/minikube-integration/20602-1162386/.minikube/ca.pem
	I0407 13:26:24.593213 1205243 exec_runner.go:151] cp: /home/jenkins/minikube-integration/20602-1162386/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/20602-1162386/.minikube/ca.pem (1078 bytes)
	I0407 13:26:24.593318 1205243 exec_runner.go:144] found /home/jenkins/minikube-integration/20602-1162386/.minikube/cert.pem, removing ...
	I0407 13:26:24.593325 1205243 exec_runner.go:203] rm: /home/jenkins/minikube-integration/20602-1162386/.minikube/cert.pem
	I0407 13:26:24.593348 1205243 exec_runner.go:151] cp: /home/jenkins/minikube-integration/20602-1162386/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/20602-1162386/.minikube/cert.pem (1123 bytes)
	I0407 13:26:24.593399 1205243 exec_runner.go:144] found /home/jenkins/minikube-integration/20602-1162386/.minikube/key.pem, removing ...
	I0407 13:26:24.593406 1205243 exec_runner.go:203] rm: /home/jenkins/minikube-integration/20602-1162386/.minikube/key.pem
	I0407 13:26:24.593429 1205243 exec_runner.go:151] cp: /home/jenkins/minikube-integration/20602-1162386/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/20602-1162386/.minikube/key.pem (1675 bytes)
	I0407 13:26:24.593489 1205243 provision.go:117] generating server cert: /home/jenkins/minikube-integration/20602-1162386/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/20602-1162386/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/20602-1162386/.minikube/certs/ca-key.pem org=jenkins.test-preload-271062 san=[127.0.0.1 192.168.39.95 localhost minikube test-preload-271062]
	I0407 13:26:24.618378 1205243 provision.go:177] copyRemoteCerts
	I0407 13:26:24.618447 1205243 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0407 13:26:24.618474 1205243 main.go:141] libmachine: (test-preload-271062) Calling .GetSSHHostname
	I0407 13:26:24.621552 1205243 main.go:141] libmachine: (test-preload-271062) DBG | domain test-preload-271062 has defined MAC address 52:54:00:e5:8f:41 in network mk-test-preload-271062
	I0407 13:26:24.622037 1205243 main.go:141] libmachine: (test-preload-271062) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e5:8f:41", ip: ""} in network mk-test-preload-271062: {Iface:virbr1 ExpiryTime:2025-04-07 14:26:16 +0000 UTC Type:0 Mac:52:54:00:e5:8f:41 Iaid: IPaddr:192.168.39.95 Prefix:24 Hostname:test-preload-271062 Clientid:01:52:54:00:e5:8f:41}
	I0407 13:26:24.622063 1205243 main.go:141] libmachine: (test-preload-271062) DBG | domain test-preload-271062 has defined IP address 192.168.39.95 and MAC address 52:54:00:e5:8f:41 in network mk-test-preload-271062
	I0407 13:26:24.622265 1205243 main.go:141] libmachine: (test-preload-271062) Calling .GetSSHPort
	I0407 13:26:24.622486 1205243 main.go:141] libmachine: (test-preload-271062) Calling .GetSSHKeyPath
	I0407 13:26:24.622680 1205243 main.go:141] libmachine: (test-preload-271062) Calling .GetSSHUsername
	I0407 13:26:24.622798 1205243 sshutil.go:53] new ssh client: &{IP:192.168.39.95 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20602-1162386/.minikube/machines/test-preload-271062/id_rsa Username:docker}
	I0407 13:26:24.709745 1205243 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20602-1162386/.minikube/machines/server.pem --> /etc/docker/server.pem (1224 bytes)
	I0407 13:26:24.734025 1205243 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20602-1162386/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0407 13:26:24.758038 1205243 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20602-1162386/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0407 13:26:24.782118 1205243 provision.go:87] duration metric: took 195.727081ms to configureAuth
	I0407 13:26:24.782159 1205243 buildroot.go:189] setting minikube options for container-runtime
	I0407 13:26:24.782376 1205243 config.go:182] Loaded profile config "test-preload-271062": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.24.4
	I0407 13:26:24.782476 1205243 main.go:141] libmachine: (test-preload-271062) Calling .GetSSHHostname
	I0407 13:26:24.785071 1205243 main.go:141] libmachine: (test-preload-271062) DBG | domain test-preload-271062 has defined MAC address 52:54:00:e5:8f:41 in network mk-test-preload-271062
	I0407 13:26:24.785559 1205243 main.go:141] libmachine: (test-preload-271062) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e5:8f:41", ip: ""} in network mk-test-preload-271062: {Iface:virbr1 ExpiryTime:2025-04-07 14:26:16 +0000 UTC Type:0 Mac:52:54:00:e5:8f:41 Iaid: IPaddr:192.168.39.95 Prefix:24 Hostname:test-preload-271062 Clientid:01:52:54:00:e5:8f:41}
	I0407 13:26:24.785582 1205243 main.go:141] libmachine: (test-preload-271062) DBG | domain test-preload-271062 has defined IP address 192.168.39.95 and MAC address 52:54:00:e5:8f:41 in network mk-test-preload-271062
	I0407 13:26:24.785905 1205243 main.go:141] libmachine: (test-preload-271062) Calling .GetSSHPort
	I0407 13:26:24.786111 1205243 main.go:141] libmachine: (test-preload-271062) Calling .GetSSHKeyPath
	I0407 13:26:24.786292 1205243 main.go:141] libmachine: (test-preload-271062) Calling .GetSSHKeyPath
	I0407 13:26:24.786493 1205243 main.go:141] libmachine: (test-preload-271062) Calling .GetSSHUsername
	I0407 13:26:24.786682 1205243 main.go:141] libmachine: Using SSH client type: native
	I0407 13:26:24.786949 1205243 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x836360] 0x839060 <nil>  [] 0s} 192.168.39.95 22 <nil> <nil>}
	I0407 13:26:24.786966 1205243 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0407 13:26:25.016153 1205243 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0407 13:26:25.016182 1205243 machine.go:96] duration metric: took 820.133264ms to provisionDockerMachine
	I0407 13:26:25.016195 1205243 start.go:293] postStartSetup for "test-preload-271062" (driver="kvm2")
	I0407 13:26:25.016220 1205243 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0407 13:26:25.016245 1205243 main.go:141] libmachine: (test-preload-271062) Calling .DriverName
	I0407 13:26:25.016612 1205243 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0407 13:26:25.016659 1205243 main.go:141] libmachine: (test-preload-271062) Calling .GetSSHHostname
	I0407 13:26:25.019475 1205243 main.go:141] libmachine: (test-preload-271062) DBG | domain test-preload-271062 has defined MAC address 52:54:00:e5:8f:41 in network mk-test-preload-271062
	I0407 13:26:25.019922 1205243 main.go:141] libmachine: (test-preload-271062) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e5:8f:41", ip: ""} in network mk-test-preload-271062: {Iface:virbr1 ExpiryTime:2025-04-07 14:26:16 +0000 UTC Type:0 Mac:52:54:00:e5:8f:41 Iaid: IPaddr:192.168.39.95 Prefix:24 Hostname:test-preload-271062 Clientid:01:52:54:00:e5:8f:41}
	I0407 13:26:25.019950 1205243 main.go:141] libmachine: (test-preload-271062) DBG | domain test-preload-271062 has defined IP address 192.168.39.95 and MAC address 52:54:00:e5:8f:41 in network mk-test-preload-271062
	I0407 13:26:25.020204 1205243 main.go:141] libmachine: (test-preload-271062) Calling .GetSSHPort
	I0407 13:26:25.020466 1205243 main.go:141] libmachine: (test-preload-271062) Calling .GetSSHKeyPath
	I0407 13:26:25.020637 1205243 main.go:141] libmachine: (test-preload-271062) Calling .GetSSHUsername
	I0407 13:26:25.020768 1205243 sshutil.go:53] new ssh client: &{IP:192.168.39.95 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20602-1162386/.minikube/machines/test-preload-271062/id_rsa Username:docker}
	I0407 13:26:25.109092 1205243 ssh_runner.go:195] Run: cat /etc/os-release
	I0407 13:26:25.113471 1205243 info.go:137] Remote host: Buildroot 2023.02.9
	I0407 13:26:25.113504 1205243 filesync.go:126] Scanning /home/jenkins/minikube-integration/20602-1162386/.minikube/addons for local assets ...
	I0407 13:26:25.113584 1205243 filesync.go:126] Scanning /home/jenkins/minikube-integration/20602-1162386/.minikube/files for local assets ...
	I0407 13:26:25.113664 1205243 filesync.go:149] local asset: /home/jenkins/minikube-integration/20602-1162386/.minikube/files/etc/ssl/certs/11697162.pem -> 11697162.pem in /etc/ssl/certs
	I0407 13:26:25.113786 1205243 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0407 13:26:25.123392 1205243 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20602-1162386/.minikube/files/etc/ssl/certs/11697162.pem --> /etc/ssl/certs/11697162.pem (1708 bytes)
	I0407 13:26:25.147182 1205243 start.go:296] duration metric: took 130.968995ms for postStartSetup
	I0407 13:26:25.147240 1205243 fix.go:56] duration metric: took 19.698717194s for fixHost
	I0407 13:26:25.147271 1205243 main.go:141] libmachine: (test-preload-271062) Calling .GetSSHHostname
	I0407 13:26:25.150326 1205243 main.go:141] libmachine: (test-preload-271062) DBG | domain test-preload-271062 has defined MAC address 52:54:00:e5:8f:41 in network mk-test-preload-271062
	I0407 13:26:25.150811 1205243 main.go:141] libmachine: (test-preload-271062) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e5:8f:41", ip: ""} in network mk-test-preload-271062: {Iface:virbr1 ExpiryTime:2025-04-07 14:26:16 +0000 UTC Type:0 Mac:52:54:00:e5:8f:41 Iaid: IPaddr:192.168.39.95 Prefix:24 Hostname:test-preload-271062 Clientid:01:52:54:00:e5:8f:41}
	I0407 13:26:25.150840 1205243 main.go:141] libmachine: (test-preload-271062) DBG | domain test-preload-271062 has defined IP address 192.168.39.95 and MAC address 52:54:00:e5:8f:41 in network mk-test-preload-271062
	I0407 13:26:25.151029 1205243 main.go:141] libmachine: (test-preload-271062) Calling .GetSSHPort
	I0407 13:26:25.151296 1205243 main.go:141] libmachine: (test-preload-271062) Calling .GetSSHKeyPath
	I0407 13:26:25.151545 1205243 main.go:141] libmachine: (test-preload-271062) Calling .GetSSHKeyPath
	I0407 13:26:25.151756 1205243 main.go:141] libmachine: (test-preload-271062) Calling .GetSSHUsername
	I0407 13:26:25.151932 1205243 main.go:141] libmachine: Using SSH client type: native
	I0407 13:26:25.152136 1205243 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x836360] 0x839060 <nil>  [] 0s} 192.168.39.95 22 <nil> <nil>}
	I0407 13:26:25.152146 1205243 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0407 13:26:25.266591 1205243 main.go:141] libmachine: SSH cmd err, output: <nil>: 1744032385.237172517
	
	I0407 13:26:25.266622 1205243 fix.go:216] guest clock: 1744032385.237172517
	I0407 13:26:25.266635 1205243 fix.go:229] Guest: 2025-04-07 13:26:25.237172517 +0000 UTC Remote: 2025-04-07 13:26:25.147245875 +0000 UTC m=+24.237712345 (delta=89.926642ms)
	I0407 13:26:25.266668 1205243 fix.go:200] guest clock delta is within tolerance: 89.926642ms
	I0407 13:26:25.266684 1205243 start.go:83] releasing machines lock for "test-preload-271062", held for 19.818165928s
	I0407 13:26:25.266720 1205243 main.go:141] libmachine: (test-preload-271062) Calling .DriverName
	I0407 13:26:25.267027 1205243 main.go:141] libmachine: (test-preload-271062) Calling .GetIP
	I0407 13:26:25.270108 1205243 main.go:141] libmachine: (test-preload-271062) DBG | domain test-preload-271062 has defined MAC address 52:54:00:e5:8f:41 in network mk-test-preload-271062
	I0407 13:26:25.270500 1205243 main.go:141] libmachine: (test-preload-271062) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e5:8f:41", ip: ""} in network mk-test-preload-271062: {Iface:virbr1 ExpiryTime:2025-04-07 14:26:16 +0000 UTC Type:0 Mac:52:54:00:e5:8f:41 Iaid: IPaddr:192.168.39.95 Prefix:24 Hostname:test-preload-271062 Clientid:01:52:54:00:e5:8f:41}
	I0407 13:26:25.270534 1205243 main.go:141] libmachine: (test-preload-271062) DBG | domain test-preload-271062 has defined IP address 192.168.39.95 and MAC address 52:54:00:e5:8f:41 in network mk-test-preload-271062
	I0407 13:26:25.270651 1205243 main.go:141] libmachine: (test-preload-271062) Calling .DriverName
	I0407 13:26:25.271276 1205243 main.go:141] libmachine: (test-preload-271062) Calling .DriverName
	I0407 13:26:25.271494 1205243 main.go:141] libmachine: (test-preload-271062) Calling .DriverName
	I0407 13:26:25.271654 1205243 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0407 13:26:25.271711 1205243 main.go:141] libmachine: (test-preload-271062) Calling .GetSSHHostname
	I0407 13:26:25.271819 1205243 ssh_runner.go:195] Run: cat /version.json
	I0407 13:26:25.271848 1205243 main.go:141] libmachine: (test-preload-271062) Calling .GetSSHHostname
	I0407 13:26:25.274564 1205243 main.go:141] libmachine: (test-preload-271062) DBG | domain test-preload-271062 has defined MAC address 52:54:00:e5:8f:41 in network mk-test-preload-271062
	I0407 13:26:25.274903 1205243 main.go:141] libmachine: (test-preload-271062) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e5:8f:41", ip: ""} in network mk-test-preload-271062: {Iface:virbr1 ExpiryTime:2025-04-07 14:26:16 +0000 UTC Type:0 Mac:52:54:00:e5:8f:41 Iaid: IPaddr:192.168.39.95 Prefix:24 Hostname:test-preload-271062 Clientid:01:52:54:00:e5:8f:41}
	I0407 13:26:25.274929 1205243 main.go:141] libmachine: (test-preload-271062) DBG | domain test-preload-271062 has defined MAC address 52:54:00:e5:8f:41 in network mk-test-preload-271062
	I0407 13:26:25.274954 1205243 main.go:141] libmachine: (test-preload-271062) DBG | domain test-preload-271062 has defined IP address 192.168.39.95 and MAC address 52:54:00:e5:8f:41 in network mk-test-preload-271062
	I0407 13:26:25.275172 1205243 main.go:141] libmachine: (test-preload-271062) Calling .GetSSHPort
	I0407 13:26:25.275387 1205243 main.go:141] libmachine: (test-preload-271062) Calling .GetSSHKeyPath
	I0407 13:26:25.275425 1205243 main.go:141] libmachine: (test-preload-271062) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e5:8f:41", ip: ""} in network mk-test-preload-271062: {Iface:virbr1 ExpiryTime:2025-04-07 14:26:16 +0000 UTC Type:0 Mac:52:54:00:e5:8f:41 Iaid: IPaddr:192.168.39.95 Prefix:24 Hostname:test-preload-271062 Clientid:01:52:54:00:e5:8f:41}
	I0407 13:26:25.275450 1205243 main.go:141] libmachine: (test-preload-271062) DBG | domain test-preload-271062 has defined IP address 192.168.39.95 and MAC address 52:54:00:e5:8f:41 in network mk-test-preload-271062
	I0407 13:26:25.275582 1205243 main.go:141] libmachine: (test-preload-271062) Calling .GetSSHUsername
	I0407 13:26:25.275677 1205243 main.go:141] libmachine: (test-preload-271062) Calling .GetSSHPort
	I0407 13:26:25.275783 1205243 sshutil.go:53] new ssh client: &{IP:192.168.39.95 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20602-1162386/.minikube/machines/test-preload-271062/id_rsa Username:docker}
	I0407 13:26:25.275839 1205243 main.go:141] libmachine: (test-preload-271062) Calling .GetSSHKeyPath
	I0407 13:26:25.275993 1205243 main.go:141] libmachine: (test-preload-271062) Calling .GetSSHUsername
	I0407 13:26:25.276118 1205243 sshutil.go:53] new ssh client: &{IP:192.168.39.95 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20602-1162386/.minikube/machines/test-preload-271062/id_rsa Username:docker}
	I0407 13:26:25.382577 1205243 ssh_runner.go:195] Run: systemctl --version
	I0407 13:26:25.388588 1205243 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0407 13:26:25.538271 1205243 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0407 13:26:25.545020 1205243 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0407 13:26:25.545102 1205243 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0407 13:26:25.564135 1205243 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0407 13:26:25.564167 1205243 start.go:495] detecting cgroup driver to use...
	I0407 13:26:25.564247 1205243 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0407 13:26:25.580208 1205243 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0407 13:26:25.595174 1205243 docker.go:217] disabling cri-docker service (if available) ...
	I0407 13:26:25.595261 1205243 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0407 13:26:25.610246 1205243 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0407 13:26:25.625310 1205243 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0407 13:26:25.740804 1205243 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0407 13:26:25.873292 1205243 docker.go:233] disabling docker service ...
	I0407 13:26:25.873370 1205243 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0407 13:26:25.890295 1205243 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0407 13:26:25.905403 1205243 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0407 13:26:26.051598 1205243 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0407 13:26:26.180947 1205243 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0407 13:26:26.196883 1205243 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0407 13:26:26.217856 1205243 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.7" pause image...
	I0407 13:26:26.217935 1205243 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.7"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0407 13:26:26.229863 1205243 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0407 13:26:26.230048 1205243 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0407 13:26:26.242751 1205243 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0407 13:26:26.253903 1205243 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0407 13:26:26.265626 1205243 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0407 13:26:26.278265 1205243 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0407 13:26:26.292718 1205243 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I0407 13:26:26.314074 1205243 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I0407 13:26:26.325257 1205243 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0407 13:26:26.335484 1205243 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0407 13:26:26.335550 1205243 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0407 13:26:26.350053 1205243 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0407 13:26:26.360915 1205243 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0407 13:26:26.489494 1205243 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0407 13:26:26.596613 1205243 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I0407 13:26:26.596706 1205243 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0407 13:26:26.602168 1205243 start.go:563] Will wait 60s for crictl version
	I0407 13:26:26.602250 1205243 ssh_runner.go:195] Run: which crictl
	I0407 13:26:26.606833 1205243 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0407 13:26:26.647624 1205243 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0407 13:26:26.647707 1205243 ssh_runner.go:195] Run: crio --version
	I0407 13:26:26.678386 1205243 ssh_runner.go:195] Run: crio --version
	I0407 13:26:26.709892 1205243 out.go:177] * Preparing Kubernetes v1.24.4 on CRI-O 1.29.1 ...
	I0407 13:26:26.711817 1205243 main.go:141] libmachine: (test-preload-271062) Calling .GetIP
	I0407 13:26:26.715111 1205243 main.go:141] libmachine: (test-preload-271062) DBG | domain test-preload-271062 has defined MAC address 52:54:00:e5:8f:41 in network mk-test-preload-271062
	I0407 13:26:26.715781 1205243 main.go:141] libmachine: (test-preload-271062) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e5:8f:41", ip: ""} in network mk-test-preload-271062: {Iface:virbr1 ExpiryTime:2025-04-07 14:26:16 +0000 UTC Type:0 Mac:52:54:00:e5:8f:41 Iaid: IPaddr:192.168.39.95 Prefix:24 Hostname:test-preload-271062 Clientid:01:52:54:00:e5:8f:41}
	I0407 13:26:26.715819 1205243 main.go:141] libmachine: (test-preload-271062) DBG | domain test-preload-271062 has defined IP address 192.168.39.95 and MAC address 52:54:00:e5:8f:41 in network mk-test-preload-271062
	I0407 13:26:26.716222 1205243 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I0407 13:26:26.720672 1205243 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.39.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0407 13:26:26.733446 1205243 kubeadm.go:883] updating cluster {Name:test-preload-271062 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.35.0-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.46-1743675393-20591@sha256:8de8167e280414c9d16e4c3da59bc85bc7c9cc24228af995863fc7bcabfcf727 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.24.4 ClusterName:test-prelo
ad-271062 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.95 Port:8443 KubernetesVersion:v1.24.4 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 Mou
ntOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0407 13:26:26.733561 1205243 preload.go:131] Checking if preload exists for k8s version v1.24.4 and runtime crio
	I0407 13:26:26.733613 1205243 ssh_runner.go:195] Run: sudo crictl images --output json
	I0407 13:26:26.774692 1205243 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.24.4". assuming images are not preloaded.
	I0407 13:26:26.774782 1205243 ssh_runner.go:195] Run: which lz4
	I0407 13:26:26.779659 1205243 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I0407 13:26:26.784589 1205243 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0407 13:26:26.784643 1205243 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20602-1162386/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.24.4-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (459355427 bytes)
	I0407 13:26:28.486113 1205243 crio.go:462] duration metric: took 1.706496198s to copy over tarball
	I0407 13:26:28.486262 1205243 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I0407 13:26:31.283712 1205243 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (2.797413589s)
	I0407 13:26:31.283747 1205243 crio.go:469] duration metric: took 2.797595671s to extract the tarball
	I0407 13:26:31.283754 1205243 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I0407 13:26:31.326610 1205243 ssh_runner.go:195] Run: sudo crictl images --output json
	I0407 13:26:31.377220 1205243 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.24.4". assuming images are not preloaded.
	I0407 13:26:31.377251 1205243 cache_images.go:88] LoadCachedImages start: [registry.k8s.io/kube-apiserver:v1.24.4 registry.k8s.io/kube-controller-manager:v1.24.4 registry.k8s.io/kube-scheduler:v1.24.4 registry.k8s.io/kube-proxy:v1.24.4 registry.k8s.io/pause:3.7 registry.k8s.io/etcd:3.5.3-0 registry.k8s.io/coredns/coredns:v1.8.6 gcr.io/k8s-minikube/storage-provisioner:v5]
	I0407 13:26:31.377342 1205243 image.go:135] retrieving image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0407 13:26:31.377372 1205243 image.go:135] retrieving image: registry.k8s.io/kube-proxy:v1.24.4
	I0407 13:26:31.377405 1205243 image.go:135] retrieving image: registry.k8s.io/kube-apiserver:v1.24.4
	I0407 13:26:31.377429 1205243 image.go:135] retrieving image: registry.k8s.io/pause:3.7
	I0407 13:26:31.377446 1205243 image.go:135] retrieving image: registry.k8s.io/coredns/coredns:v1.8.6
	I0407 13:26:31.377474 1205243 image.go:135] retrieving image: registry.k8s.io/etcd:3.5.3-0
	I0407 13:26:31.377407 1205243 image.go:135] retrieving image: registry.k8s.io/kube-scheduler:v1.24.4
	I0407 13:26:31.377387 1205243 image.go:135] retrieving image: registry.k8s.io/kube-controller-manager:v1.24.4
	I0407 13:26:31.378895 1205243 image.go:178] daemon lookup for registry.k8s.io/kube-proxy:v1.24.4: Error response from daemon: No such image: registry.k8s.io/kube-proxy:v1.24.4
	I0407 13:26:31.378895 1205243 image.go:178] daemon lookup for registry.k8s.io/coredns/coredns:v1.8.6: Error response from daemon: No such image: registry.k8s.io/coredns/coredns:v1.8.6
	I0407 13:26:31.378915 1205243 image.go:178] daemon lookup for gcr.io/k8s-minikube/storage-provisioner:v5: Error response from daemon: No such image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0407 13:26:31.378926 1205243 image.go:178] daemon lookup for registry.k8s.io/kube-scheduler:v1.24.4: Error response from daemon: No such image: registry.k8s.io/kube-scheduler:v1.24.4
	I0407 13:26:31.378897 1205243 image.go:178] daemon lookup for registry.k8s.io/pause:3.7: Error response from daemon: No such image: registry.k8s.io/pause:3.7
	I0407 13:26:31.378895 1205243 image.go:178] daemon lookup for registry.k8s.io/kube-controller-manager:v1.24.4: Error response from daemon: No such image: registry.k8s.io/kube-controller-manager:v1.24.4
	I0407 13:26:31.378896 1205243 image.go:178] daemon lookup for registry.k8s.io/etcd:3.5.3-0: Error response from daemon: No such image: registry.k8s.io/etcd:3.5.3-0
	I0407 13:26:31.378969 1205243 image.go:178] daemon lookup for registry.k8s.io/kube-apiserver:v1.24.4: Error response from daemon: No such image: registry.k8s.io/kube-apiserver:v1.24.4
	I0407 13:26:31.524407 1205243 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/etcd:3.5.3-0
	I0407 13:26:31.528385 1205243 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/coredns/coredns:v1.8.6
	I0407 13:26:31.533751 1205243 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-scheduler:v1.24.4
	I0407 13:26:31.538512 1205243 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-apiserver:v1.24.4
	I0407 13:26:31.544329 1205243 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-controller-manager:v1.24.4
	I0407 13:26:31.547091 1205243 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/pause:3.7
	I0407 13:26:31.577022 1205243 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-proxy:v1.24.4
	I0407 13:26:31.622242 1205243 cache_images.go:116] "registry.k8s.io/etcd:3.5.3-0" needs transfer: "registry.k8s.io/etcd:3.5.3-0" does not exist at hash "aebe758cef4cd05b9f8cee39758227714d02f42ef3088023c1e3cd454f927a2b" in container runtime
	I0407 13:26:31.622313 1205243 cri.go:218] Removing image: registry.k8s.io/etcd:3.5.3-0
	I0407 13:26:31.622371 1205243 ssh_runner.go:195] Run: which crictl
	I0407 13:26:31.680253 1205243 cache_images.go:116] "registry.k8s.io/coredns/coredns:v1.8.6" needs transfer: "registry.k8s.io/coredns/coredns:v1.8.6" does not exist at hash "a4ca41631cc7ac19ce1be3ebf0314ac5f47af7c711f17066006db82ee3b75b03" in container runtime
	I0407 13:26:31.680318 1205243 cri.go:218] Removing image: registry.k8s.io/coredns/coredns:v1.8.6
	I0407 13:26:31.680441 1205243 ssh_runner.go:195] Run: which crictl
	I0407 13:26:31.729188 1205243 cache_images.go:116] "registry.k8s.io/kube-scheduler:v1.24.4" needs transfer: "registry.k8s.io/kube-scheduler:v1.24.4" does not exist at hash "03fa22539fc1ccdb96fb15098e7a02fff03d0e366ce5d80891eb0a3a8594a0c9" in container runtime
	I0407 13:26:31.729266 1205243 cri.go:218] Removing image: registry.k8s.io/kube-scheduler:v1.24.4
	I0407 13:26:31.729331 1205243 ssh_runner.go:195] Run: which crictl
	I0407 13:26:31.740059 1205243 cache_images.go:116] "registry.k8s.io/kube-apiserver:v1.24.4" needs transfer: "registry.k8s.io/kube-apiserver:v1.24.4" does not exist at hash "6cab9d1bed1be49c215505c1a438ce0af66eb54b4e95f06e52037fcd36631f3d" in container runtime
	I0407 13:26:31.740121 1205243 cri.go:218] Removing image: registry.k8s.io/kube-apiserver:v1.24.4
	I0407 13:26:31.740175 1205243 ssh_runner.go:195] Run: which crictl
	I0407 13:26:31.743610 1205243 cache_images.go:116] "registry.k8s.io/kube-controller-manager:v1.24.4" needs transfer: "registry.k8s.io/kube-controller-manager:v1.24.4" does not exist at hash "1f99cb6da9a82e81081f65acdad10cdca2e5ec4084f91009bdcff31dd6151d48" in container runtime
	I0407 13:26:31.743662 1205243 cache_images.go:116] "registry.k8s.io/kube-proxy:v1.24.4" needs transfer: "registry.k8s.io/kube-proxy:v1.24.4" does not exist at hash "7a53d1e08ef58144850b48d05908b4ef5b611bff99a5a66dbcba7ab9f79433f7" in container runtime
	I0407 13:26:31.743677 1205243 cri.go:218] Removing image: registry.k8s.io/kube-controller-manager:v1.24.4
	I0407 13:26:31.743703 1205243 cri.go:218] Removing image: registry.k8s.io/kube-proxy:v1.24.4
	I0407 13:26:31.743741 1205243 ssh_runner.go:195] Run: which crictl
	I0407 13:26:31.743762 1205243 ssh_runner.go:195] Run: which crictl
	I0407 13:26:31.743618 1205243 cache_images.go:116] "registry.k8s.io/pause:3.7" needs transfer: "registry.k8s.io/pause:3.7" does not exist at hash "221177c6082a88ea4f6240ab2450d540955ac6f4d5454f0e15751b653ebda165" in container runtime
	I0407 13:26:31.743800 1205243 cri.go:218] Removing image: registry.k8s.io/pause:3.7
	I0407 13:26:31.743828 1205243 ssh_runner.go:195] Run: which crictl
	I0407 13:26:31.743848 1205243 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.5.3-0
	I0407 13:26:31.743883 1205243 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns/coredns:v1.8.6
	I0407 13:26:31.743941 1205243 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.24.4
	I0407 13:26:31.746524 1205243 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.24.4
	I0407 13:26:31.758875 1205243 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.24.4
	I0407 13:26:31.758880 1205243 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/pause:3.7
	I0407 13:26:31.906552 1205243 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns/coredns:v1.8.6
	I0407 13:26:31.906609 1205243 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.24.4
	I0407 13:26:31.906667 1205243 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.5.3-0
	I0407 13:26:31.907518 1205243 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.24.4
	I0407 13:26:31.907592 1205243 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.24.4
	I0407 13:26:31.907698 1205243 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.24.4
	I0407 13:26:31.907723 1205243 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/pause:3.7
	I0407 13:26:32.062958 1205243 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns/coredns:v1.8.6
	I0407 13:26:32.063036 1205243 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.5.3-0
	I0407 13:26:32.063072 1205243 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.24.4
	I0407 13:26:32.086937 1205243 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/pause:3.7
	I0407 13:26:32.087003 1205243 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.24.4
	I0407 13:26:32.087303 1205243 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.24.4
	I0407 13:26:32.087313 1205243 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.24.4
	I0407 13:26:32.133620 1205243 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/20602-1162386/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.5.3-0
	I0407 13:26:32.133762 1205243 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/etcd_3.5.3-0
	I0407 13:26:32.255645 1205243 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.24.4
	I0407 13:26:32.255954 1205243 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/20602-1162386/.minikube/cache/images/amd64/registry.k8s.io/coredns/coredns_v1.8.6
	I0407 13:26:32.255980 1205243 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/20602-1162386/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.24.4
	I0407 13:26:32.256086 1205243 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/coredns_v1.8.6
	I0407 13:26:32.256225 1205243 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/kube-scheduler_v1.24.4
	I0407 13:26:32.273599 1205243 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/20602-1162386/.minikube/cache/images/amd64/registry.k8s.io/pause_3.7
	I0407 13:26:32.273781 1205243 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/pause_3.7
	I0407 13:26:32.273790 1205243 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/20602-1162386/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.24.4
	I0407 13:26:32.273923 1205243 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/20602-1162386/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.24.4
	I0407 13:26:32.273945 1205243 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/kube-proxy_v1.24.4
	I0407 13:26:32.273984 1205243 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/etcd_3.5.3-0 (exists)
	I0407 13:26:32.274017 1205243 crio.go:275] Loading image: /var/lib/minikube/images/etcd_3.5.3-0
	I0407 13:26:32.274034 1205243 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/kube-apiserver_v1.24.4
	I0407 13:26:32.274066 1205243 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/etcd_3.5.3-0
	I0407 13:26:32.330828 1205243 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/20602-1162386/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.24.4
	I0407 13:26:32.330832 1205243 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/coredns_v1.8.6 (exists)
	I0407 13:26:32.330925 1205243 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/kube-scheduler_v1.24.4 (exists)
	I0407 13:26:32.330998 1205243 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/kube-controller-manager_v1.24.4
	I0407 13:26:33.302372 1205243 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} gcr.io/k8s-minikube/storage-provisioner:v5
	I0407 13:26:35.560567 1205243 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/etcd_3.5.3-0: (3.286467115s)
	I0407 13:26:35.560614 1205243 cache_images.go:321] Transferred and loaded /home/jenkins/minikube-integration/20602-1162386/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.5.3-0 from cache
	I0407 13:26:35.560627 1205243 ssh_runner.go:235] Completed: stat -c "%s %y" /var/lib/minikube/images/pause_3.7: (3.286824084s)
	I0407 13:26:35.560644 1205243 crio.go:275] Loading image: /var/lib/minikube/images/coredns_v1.8.6
	I0407 13:26:35.560662 1205243 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/pause_3.7 (exists)
	I0407 13:26:35.560683 1205243 ssh_runner.go:235] Completed: stat -c "%s %y" /var/lib/minikube/images/kube-apiserver_v1.24.4: (3.286628039s)
	I0407 13:26:35.560697 1205243 ssh_runner.go:235] Completed: stat -c "%s %y" /var/lib/minikube/images/kube-proxy_v1.24.4: (3.286739885s)
	I0407 13:26:35.560707 1205243 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/coredns_v1.8.6
	I0407 13:26:35.560708 1205243 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/kube-proxy_v1.24.4 (exists)
	I0407 13:26:35.560712 1205243 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/kube-apiserver_v1.24.4 (exists)
	I0407 13:26:35.560740 1205243 ssh_runner.go:235] Completed: stat -c "%s %y" /var/lib/minikube/images/kube-controller-manager_v1.24.4: (3.229722677s)
	I0407 13:26:35.560757 1205243 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/kube-controller-manager_v1.24.4 (exists)
	I0407 13:26:35.560786 1205243 ssh_runner.go:235] Completed: sudo podman image inspect --format {{.Id}} gcr.io/k8s-minikube/storage-provisioner:v5: (2.258383002s)
	I0407 13:26:36.014946 1205243 cache_images.go:321] Transferred and loaded /home/jenkins/minikube-integration/20602-1162386/.minikube/cache/images/amd64/registry.k8s.io/coredns/coredns_v1.8.6 from cache
	I0407 13:26:36.015080 1205243 crio.go:275] Loading image: /var/lib/minikube/images/kube-scheduler_v1.24.4
	I0407 13:26:36.015164 1205243 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-scheduler_v1.24.4
	I0407 13:26:36.466374 1205243 cache_images.go:321] Transferred and loaded /home/jenkins/minikube-integration/20602-1162386/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.24.4 from cache
	I0407 13:26:36.466434 1205243 crio.go:275] Loading image: /var/lib/minikube/images/pause_3.7
	I0407 13:26:36.466489 1205243 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/pause_3.7
	I0407 13:26:36.617143 1205243 cache_images.go:321] Transferred and loaded /home/jenkins/minikube-integration/20602-1162386/.minikube/cache/images/amd64/registry.k8s.io/pause_3.7 from cache
	I0407 13:26:36.617212 1205243 crio.go:275] Loading image: /var/lib/minikube/images/kube-proxy_v1.24.4
	I0407 13:26:36.617284 1205243 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-proxy_v1.24.4
	I0407 13:26:37.471537 1205243 cache_images.go:321] Transferred and loaded /home/jenkins/minikube-integration/20602-1162386/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.24.4 from cache
	I0407 13:26:37.471606 1205243 crio.go:275] Loading image: /var/lib/minikube/images/kube-apiserver_v1.24.4
	I0407 13:26:37.471668 1205243 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-apiserver_v1.24.4
	I0407 13:26:38.219442 1205243 cache_images.go:321] Transferred and loaded /home/jenkins/minikube-integration/20602-1162386/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.24.4 from cache
	I0407 13:26:38.219505 1205243 crio.go:275] Loading image: /var/lib/minikube/images/kube-controller-manager_v1.24.4
	I0407 13:26:38.219567 1205243 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-controller-manager_v1.24.4
	I0407 13:26:38.971815 1205243 cache_images.go:321] Transferred and loaded /home/jenkins/minikube-integration/20602-1162386/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.24.4 from cache
	I0407 13:26:38.971877 1205243 cache_images.go:123] Successfully loaded all cached images
	I0407 13:26:38.971886 1205243 cache_images.go:92] duration metric: took 7.594611975s to LoadCachedImages
	I0407 13:26:38.971903 1205243 kubeadm.go:934] updating node { 192.168.39.95 8443 v1.24.4 crio true true} ...
	I0407 13:26:38.972051 1205243 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.24.4/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime-endpoint=unix:///var/run/crio/crio.sock --hostname-override=test-preload-271062 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.39.95
	
	[Install]
	 config:
	{KubernetesVersion:v1.24.4 ClusterName:test-preload-271062 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0407 13:26:38.972128 1205243 ssh_runner.go:195] Run: crio config
	I0407 13:26:39.022399 1205243 cni.go:84] Creating CNI manager for ""
	I0407 13:26:39.022433 1205243 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0407 13:26:39.022448 1205243 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0407 13:26:39.022474 1205243 kubeadm.go:189] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.39.95 APIServerPort:8443 KubernetesVersion:v1.24.4 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:test-preload-271062 NodeName:test-preload-271062 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.39.95"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.39.95 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0407 13:26:39.022645 1205243 kubeadm.go:195] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.39.95
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "test-preload-271062"
	  kubeletExtraArgs:
	    node-ip: 192.168.39.95
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.39.95"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.24.4
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
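The kubeadm config printed above is rendered from the cluster parameters shown earlier in the log. As a minimal sketch only (the template fragment and parameter struct below are illustrative, not minikube's actual template), such a config could be produced with text/template:

// Hedged sketch: render an InitConfiguration fragment like the one above
// from the node parameters reported in the log.
package main

import (
	"os"
	"text/template"
)

const initCfg = `apiVersion: kubeadm.k8s.io/v1beta3
kind: InitConfiguration
localAPIEndpoint:
  advertiseAddress: {{.NodeIP}}
  bindPort: {{.Port}}
nodeRegistration:
  criSocket: unix:///var/run/crio/crio.sock
  name: "{{.NodeName}}"
  kubeletExtraArgs:
    node-ip: {{.NodeIP}}
`

func main() {
	params := struct {
		NodeIP   string
		Port     int
		NodeName string
	}{"192.168.39.95", 8443, "test-preload-271062"}
	template.Must(template.New("kubeadm").Parse(initCfg)).Execute(os.Stdout, params)
}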
	I0407 13:26:39.022730 1205243 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.24.4
	I0407 13:26:39.034460 1205243 binaries.go:44] Found k8s binaries, skipping transfer
	I0407 13:26:39.034568 1205243 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0407 13:26:39.045775 1205243 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (378 bytes)
	I0407 13:26:39.066164 1205243 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0407 13:26:39.087154 1205243 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2103 bytes)
	I0407 13:26:39.109215 1205243 ssh_runner.go:195] Run: grep 192.168.39.95	control-plane.minikube.internal$ /etc/hosts
	I0407 13:26:39.114511 1205243 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.39.95	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0407 13:26:39.132516 1205243 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0407 13:26:39.281138 1205243 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0407 13:26:39.302114 1205243 certs.go:68] Setting up /home/jenkins/minikube-integration/20602-1162386/.minikube/profiles/test-preload-271062 for IP: 192.168.39.95
	I0407 13:26:39.302143 1205243 certs.go:194] generating shared ca certs ...
	I0407 13:26:39.302165 1205243 certs.go:226] acquiring lock for ca certs: {Name:mk8e89191fca7f2111bdd08c345368f593b0d5d0 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0407 13:26:39.302356 1205243 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/20602-1162386/.minikube/ca.key
	I0407 13:26:39.302411 1205243 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/20602-1162386/.minikube/proxy-client-ca.key
	I0407 13:26:39.302430 1205243 certs.go:256] generating profile certs ...
	I0407 13:26:39.302533 1205243 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/20602-1162386/.minikube/profiles/test-preload-271062/client.key
	I0407 13:26:39.302615 1205243 certs.go:359] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/20602-1162386/.minikube/profiles/test-preload-271062/apiserver.key.dce2b916
	I0407 13:26:39.302670 1205243 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/20602-1162386/.minikube/profiles/test-preload-271062/proxy-client.key
	I0407 13:26:39.302828 1205243 certs.go:484] found cert: /home/jenkins/minikube-integration/20602-1162386/.minikube/certs/1169716.pem (1338 bytes)
	W0407 13:26:39.302874 1205243 certs.go:480] ignoring /home/jenkins/minikube-integration/20602-1162386/.minikube/certs/1169716_empty.pem, impossibly tiny 0 bytes
	I0407 13:26:39.302884 1205243 certs.go:484] found cert: /home/jenkins/minikube-integration/20602-1162386/.minikube/certs/ca-key.pem (1675 bytes)
	I0407 13:26:39.302921 1205243 certs.go:484] found cert: /home/jenkins/minikube-integration/20602-1162386/.minikube/certs/ca.pem (1078 bytes)
	I0407 13:26:39.302954 1205243 certs.go:484] found cert: /home/jenkins/minikube-integration/20602-1162386/.minikube/certs/cert.pem (1123 bytes)
	I0407 13:26:39.302995 1205243 certs.go:484] found cert: /home/jenkins/minikube-integration/20602-1162386/.minikube/certs/key.pem (1675 bytes)
	I0407 13:26:39.303056 1205243 certs.go:484] found cert: /home/jenkins/minikube-integration/20602-1162386/.minikube/files/etc/ssl/certs/11697162.pem (1708 bytes)
	I0407 13:26:39.303673 1205243 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20602-1162386/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0407 13:26:39.357781 1205243 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20602-1162386/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0407 13:26:39.403630 1205243 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20602-1162386/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0407 13:26:39.443670 1205243 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20602-1162386/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0407 13:26:39.491909 1205243 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20602-1162386/.minikube/profiles/test-preload-271062/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1428 bytes)
	I0407 13:26:39.526056 1205243 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20602-1162386/.minikube/profiles/test-preload-271062/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0407 13:26:39.557875 1205243 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20602-1162386/.minikube/profiles/test-preload-271062/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0407 13:26:39.605005 1205243 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20602-1162386/.minikube/profiles/test-preload-271062/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0407 13:26:39.633311 1205243 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20602-1162386/.minikube/files/etc/ssl/certs/11697162.pem --> /usr/share/ca-certificates/11697162.pem (1708 bytes)
	I0407 13:26:39.662680 1205243 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20602-1162386/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0407 13:26:39.692920 1205243 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20602-1162386/.minikube/certs/1169716.pem --> /usr/share/ca-certificates/1169716.pem (1338 bytes)
	I0407 13:26:39.723096 1205243 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0407 13:26:39.745092 1205243 ssh_runner.go:195] Run: openssl version
	I0407 13:26:39.752243 1205243 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/11697162.pem && ln -fs /usr/share/ca-certificates/11697162.pem /etc/ssl/certs/11697162.pem"
	I0407 13:26:39.764928 1205243 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/11697162.pem
	I0407 13:26:39.770809 1205243 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Apr  7 12:22 /usr/share/ca-certificates/11697162.pem
	I0407 13:26:39.770919 1205243 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/11697162.pem
	I0407 13:26:39.778150 1205243 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/11697162.pem /etc/ssl/certs/3ec20f2e.0"
	I0407 13:26:39.791016 1205243 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0407 13:26:39.803340 1205243 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0407 13:26:39.809516 1205243 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Apr  7 12:14 /usr/share/ca-certificates/minikubeCA.pem
	I0407 13:26:39.809650 1205243 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0407 13:26:39.816465 1205243 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0407 13:26:39.829748 1205243 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/1169716.pem && ln -fs /usr/share/ca-certificates/1169716.pem /etc/ssl/certs/1169716.pem"
	I0407 13:26:39.842977 1205243 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/1169716.pem
	I0407 13:26:39.849190 1205243 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Apr  7 12:22 /usr/share/ca-certificates/1169716.pem
	I0407 13:26:39.849271 1205243 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/1169716.pem
	I0407 13:26:39.856320 1205243 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/1169716.pem /etc/ssl/certs/51391683.0"
	I0407 13:26:39.868426 1205243 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0407 13:26:39.874176 1205243 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0407 13:26:39.880970 1205243 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0407 13:26:39.888401 1205243 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0407 13:26:39.896512 1205243 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0407 13:26:39.904284 1205243 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0407 13:26:39.911565 1205243 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
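The openssl "-checkend 86400" runs above ask whether each certificate will still be valid 24 hours from now. A hedged Go equivalent using crypto/x509 (the file path is one of those checked in the log, and this is not the code minikube runs on the guest):

// Illustrative sketch: report whether a PEM certificate expires within d.
package main

import (
	"crypto/x509"
	"encoding/pem"
	"fmt"
	"os"
	"time"
)

func expiresWithin(path string, d time.Duration) (bool, error) {
	data, err := os.ReadFile(path)
	if err != nil {
		return false, err
	}
	block, _ := pem.Decode(data)
	if block == nil {
		return false, fmt.Errorf("%s: no PEM data", path)
	}
	cert, err := x509.ParseCertificate(block.Bytes)
	if err != nil {
		return false, err
	}
	// Equivalent to openssl x509 -checkend: valid "d" from now?
	return time.Now().Add(d).After(cert.NotAfter), nil
}

func main() {
	soon, err := expiresWithin("/var/lib/minikube/certs/apiserver-kubelet-client.crt", 24*time.Hour)
	fmt.Println("expires within 24h:", soon, "err:", err)
}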
	I0407 13:26:39.919198 1205243 kubeadm.go:392] StartCluster: {Name:test-preload-271062 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.35.0-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.46-1743675393-20591@sha256:8de8167e280414c9d16e4c3da59bc85bc7c9cc24228af995863fc7bcabfcf727 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.24.4 ClusterName:test-preload-271062 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.95 Port:8443 KubernetesVersion:v1.24.4 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0407 13:26:39.919317 1205243 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0407 13:26:39.919375 1205243 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0407 13:26:39.959602 1205243 cri.go:89] found id: ""
	I0407 13:26:39.959703 1205243 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0407 13:26:39.971235 1205243 kubeadm.go:408] found existing configuration files, will attempt cluster restart
	I0407 13:26:39.971280 1205243 kubeadm.go:593] restartPrimaryControlPlane start ...
	I0407 13:26:39.971343 1205243 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0407 13:26:39.982657 1205243 kubeadm.go:130] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0407 13:26:39.983200 1205243 kubeconfig.go:47] verify endpoint returned: get endpoint: "test-preload-271062" does not appear in /home/jenkins/minikube-integration/20602-1162386/kubeconfig
	I0407 13:26:39.983366 1205243 kubeconfig.go:62] /home/jenkins/minikube-integration/20602-1162386/kubeconfig needs updating (will repair): [kubeconfig missing "test-preload-271062" cluster setting kubeconfig missing "test-preload-271062" context setting]
	I0407 13:26:39.983732 1205243 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20602-1162386/kubeconfig: {Name:mk712863958f7dbf2601dd82dc9ca7bea42ef42f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0407 13:26:39.984506 1205243 kapi.go:59] client config for test-preload-271062: &rest.Config{Host:"https://192.168.39.95:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/20602-1162386/.minikube/profiles/test-preload-271062/client.crt", KeyFile:"/home/jenkins/minikube-integration/20602-1162386/.minikube/profiles/test-preload-271062/client.key", CAFile:"/home/jenkins/minikube-integration/20602-1162386/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x24968c0), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I0407 13:26:39.985074 1205243 envvar.go:172] "Feature gate default state" feature="ClientsAllowCBOR" enabled=false
	I0407 13:26:39.985100 1205243 envvar.go:172] "Feature gate default state" feature="ClientsPreferCBOR" enabled=false
	I0407 13:26:39.985106 1205243 envvar.go:172] "Feature gate default state" feature="InformerResourceVersion" enabled=false
	I0407 13:26:39.985112 1205243 envvar.go:172] "Feature gate default state" feature="WatchListClient" enabled=false
	I0407 13:26:39.985578 1205243 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0407 13:26:40.002579 1205243 kubeadm.go:630] The running cluster does not require reconfiguration: 192.168.39.95
	I0407 13:26:40.002632 1205243 kubeadm.go:1160] stopping kube-system containers ...
	I0407 13:26:40.002652 1205243 cri.go:54] listing CRI containers in root : {State:all Name: Namespaces:[kube-system]}
	I0407 13:26:40.002729 1205243 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0407 13:26:40.040657 1205243 cri.go:89] found id: ""
	I0407 13:26:40.040753 1205243 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I0407 13:26:40.057984 1205243 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0407 13:26:40.069852 1205243 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0407 13:26:40.069880 1205243 kubeadm.go:157] found existing configuration files:
	
	I0407 13:26:40.069930 1205243 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0407 13:26:40.081154 1205243 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0407 13:26:40.081247 1205243 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0407 13:26:40.091538 1205243 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0407 13:26:40.103369 1205243 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0407 13:26:40.103446 1205243 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0407 13:26:40.115729 1205243 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0407 13:26:40.129462 1205243 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0407 13:26:40.129563 1205243 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0407 13:26:40.142177 1205243 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0407 13:26:40.155485 1205243 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0407 13:26:40.155572 1205243 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0407 13:26:40.166858 1205243 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0407 13:26:40.178168 1205243 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.4:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I0407 13:26:40.307122 1205243 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.4:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I0407 13:26:40.940284 1205243 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.4:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I0407 13:26:41.217868 1205243 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.4:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I0407 13:26:41.290612 1205243 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.4:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
	I0407 13:26:41.367126 1205243 api_server.go:52] waiting for apiserver process to appear ...
	I0407 13:26:41.367226 1205243 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0407 13:26:41.867380 1205243 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0407 13:26:42.368355 1205243 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0407 13:26:42.395708 1205243 api_server.go:72] duration metric: took 1.028572927s to wait for apiserver process to appear ...
	I0407 13:26:42.395776 1205243 api_server.go:88] waiting for apiserver healthz status ...
	I0407 13:26:42.395816 1205243 api_server.go:253] Checking apiserver healthz at https://192.168.39.95:8443/healthz ...
	I0407 13:26:42.396620 1205243 api_server.go:269] stopped: https://192.168.39.95:8443/healthz: Get "https://192.168.39.95:8443/healthz": dial tcp 192.168.39.95:8443: connect: connection refused
	I0407 13:26:42.896464 1205243 api_server.go:253] Checking apiserver healthz at https://192.168.39.95:8443/healthz ...
	I0407 13:26:42.897299 1205243 api_server.go:269] stopped: https://192.168.39.95:8443/healthz: Get "https://192.168.39.95:8443/healthz": dial tcp 192.168.39.95:8443: connect: connection refused
	I0407 13:26:43.396024 1205243 api_server.go:253] Checking apiserver healthz at https://192.168.39.95:8443/healthz ...
	I0407 13:26:46.679863 1205243 api_server.go:279] https://192.168.39.95:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0407 13:26:46.679900 1205243 api_server.go:103] status: https://192.168.39.95:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0407 13:26:46.679917 1205243 api_server.go:253] Checking apiserver healthz at https://192.168.39.95:8443/healthz ...
	I0407 13:26:46.707155 1205243 api_server.go:279] https://192.168.39.95:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0407 13:26:46.707187 1205243 api_server.go:103] status: https://192.168.39.95:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0407 13:26:46.896656 1205243 api_server.go:253] Checking apiserver healthz at https://192.168.39.95:8443/healthz ...
	I0407 13:26:46.904285 1205243 api_server.go:279] https://192.168.39.95:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/bootstrap-controller ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0407 13:26:46.904323 1205243 api_server.go:103] status: https://192.168.39.95:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/bootstrap-controller ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0407 13:26:47.396034 1205243 api_server.go:253] Checking apiserver healthz at https://192.168.39.95:8443/healthz ...
	I0407 13:26:47.410168 1205243 api_server.go:279] https://192.168.39.95:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/bootstrap-controller ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0407 13:26:47.410222 1205243 api_server.go:103] status: https://192.168.39.95:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/bootstrap-controller ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0407 13:26:47.895946 1205243 api_server.go:253] Checking apiserver healthz at https://192.168.39.95:8443/healthz ...
	I0407 13:26:47.905297 1205243 api_server.go:279] https://192.168.39.95:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/bootstrap-controller ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0407 13:26:47.905355 1205243 api_server.go:103] status: https://192.168.39.95:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/bootstrap-controller ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0407 13:26:48.396114 1205243 api_server.go:253] Checking apiserver healthz at https://192.168.39.95:8443/healthz ...
	I0407 13:26:48.403347 1205243 api_server.go:279] https://192.168.39.95:8443/healthz returned 200:
	ok
	I0407 13:26:48.413299 1205243 api_server.go:141] control plane version: v1.24.4
	I0407 13:26:48.413349 1205243 api_server.go:131] duration metric: took 6.017562904s to wait for apiserver health ...
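The healthz wait above tolerates the 403 and 500 responses emitted while the apiserver's post-start hooks finish, then succeeds on the first 200. A minimal sketch of that polling loop follows; the insecure TLS client is purely for illustration (minikube itself authenticates against the cluster CA), and the URL is the one from the log.

// Hedged sketch: poll /healthz until it returns 200 or the deadline passes.
package main

import (
	"crypto/tls"
	"fmt"
	"net/http"
	"time"
)

func waitForHealthz(url string, timeout time.Duration) error {
	client := &http.Client{
		Timeout:   2 * time.Second,
		Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
	}
	deadline := time.Now().Add(timeout)
	for time.Now().Before(deadline) {
		resp, err := client.Get(url)
		if err == nil {
			resp.Body.Close()
			if resp.StatusCode == http.StatusOK {
				return nil
			}
		}
		// 403/500 while post-start hooks run, or connection refused: retry.
		time.Sleep(500 * time.Millisecond)
	}
	return fmt.Errorf("apiserver not healthy after %s", timeout)
}

func main() {
	fmt.Println(waitForHealthz("https://192.168.39.95:8443/healthz", time.Minute))
}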
	I0407 13:26:48.413363 1205243 cni.go:84] Creating CNI manager for ""
	I0407 13:26:48.413372 1205243 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0407 13:26:48.416367 1205243 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0407 13:26:48.418658 1205243 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0407 13:26:48.446522 1205243 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
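The 496-byte conflist written to /etc/cni/net.d/1-k8s.conflist above is not shown in the log; the JSON below is only a guess at what a minimal bridge configuration for the 10.244.0.0/16 pod CIDR could look like, embedded in a Go sketch for consistency with the other examples.

// Hedged sketch: write an assumed bridge CNI conflist (contents illustrative).
package main

import "os"

const bridgeConflist = `{
  "cniVersion": "0.4.0",
  "name": "bridge",
  "plugins": [
    {
      "type": "bridge",
      "bridge": "bridge",
      "isGateway": true,
      "ipMasq": true,
      "ipam": { "type": "host-local", "subnet": "10.244.0.0/16" }
    },
    { "type": "portmap", "capabilities": { "portMappings": true } }
  ]
}
`

func main() {
	// Path taken from the log line above; error ignored in this sketch.
	_ = os.WriteFile("/etc/cni/net.d/1-k8s.conflist", []byte(bridgeConflist), 0o644)
}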
	I0407 13:26:48.478346 1205243 system_pods.go:43] waiting for kube-system pods to appear ...
	I0407 13:26:48.482791 1205243 system_pods.go:59] 7 kube-system pods found
	I0407 13:26:48.482845 1205243 system_pods.go:61] "coredns-6d4b75cb6d-wdb4j" [4ae9f760-b70d-4ff4-9970-c14943d244dd] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0407 13:26:48.482859 1205243 system_pods.go:61] "etcd-test-preload-271062" [dae0a5b9-82b8-4fea-9c83-81460ed11d22] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I0407 13:26:48.482868 1205243 system_pods.go:61] "kube-apiserver-test-preload-271062" [a43de856-cd99-4cb1-b94a-7e917bfabde1] Running
	I0407 13:26:48.482879 1205243 system_pods.go:61] "kube-controller-manager-test-preload-271062" [e6b67d44-7cf0-4ed1-91ed-4c85c1a58184] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I0407 13:26:48.482886 1205243 system_pods.go:61] "kube-proxy-qg7pc" [f98c7d5e-7a3d-4fa6-af06-0ff774d44c61] Running
	I0407 13:26:48.482892 1205243 system_pods.go:61] "kube-scheduler-test-preload-271062" [c2d0b897-2a1f-4656-9696-6b1b5fa30dae] Running
	I0407 13:26:48.482900 1205243 system_pods.go:61] "storage-provisioner" [f7ac2922-9495-45c8-a013-84c2440685e0] Running / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I0407 13:26:48.482909 1205243 system_pods.go:74] duration metric: took 4.532181ms to wait for pod list to return data ...
	I0407 13:26:48.482926 1205243 node_conditions.go:102] verifying NodePressure condition ...
	I0407 13:26:48.486491 1205243 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0407 13:26:48.486556 1205243 node_conditions.go:123] node cpu capacity is 2
	I0407 13:26:48.486574 1205243 node_conditions.go:105] duration metric: took 3.641175ms to run NodePressure ...
	I0407 13:26:48.486600 1205243 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.4:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
	I0407 13:26:48.897845 1205243 kubeadm.go:724] waiting for restarted kubelet to initialise ...
	I0407 13:26:48.912221 1205243 kubeadm.go:739] kubelet initialised
	I0407 13:26:48.912251 1205243 kubeadm.go:740] duration metric: took 14.375723ms waiting for restarted kubelet to initialise ...
	I0407 13:26:48.912262 1205243 pod_ready.go:36] extra waiting up to 4m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0407 13:26:48.916838 1205243 pod_ready.go:79] waiting up to 4m0s for pod "coredns-6d4b75cb6d-wdb4j" in "kube-system" namespace to be "Ready" ...
	I0407 13:26:48.924697 1205243 pod_ready.go:98] node "test-preload-271062" hosting pod "coredns-6d4b75cb6d-wdb4j" in "kube-system" namespace is currently not "Ready" (skipping!): node "test-preload-271062" has status "Ready":"False"
	I0407 13:26:48.924725 1205243 pod_ready.go:82] duration metric: took 7.850792ms for pod "coredns-6d4b75cb6d-wdb4j" in "kube-system" namespace to be "Ready" ...
	E0407 13:26:48.924735 1205243 pod_ready.go:67] WaitExtra: waitPodCondition: node "test-preload-271062" hosting pod "coredns-6d4b75cb6d-wdb4j" in "kube-system" namespace is currently not "Ready" (skipping!): node "test-preload-271062" has status "Ready":"False"
	I0407 13:26:48.924742 1205243 pod_ready.go:79] waiting up to 4m0s for pod "etcd-test-preload-271062" in "kube-system" namespace to be "Ready" ...
	I0407 13:26:48.937459 1205243 pod_ready.go:98] node "test-preload-271062" hosting pod "etcd-test-preload-271062" in "kube-system" namespace is currently not "Ready" (skipping!): node "test-preload-271062" has status "Ready":"False"
	I0407 13:26:48.937487 1205243 pod_ready.go:82] duration metric: took 12.734688ms for pod "etcd-test-preload-271062" in "kube-system" namespace to be "Ready" ...
	E0407 13:26:48.937498 1205243 pod_ready.go:67] WaitExtra: waitPodCondition: node "test-preload-271062" hosting pod "etcd-test-preload-271062" in "kube-system" namespace is currently not "Ready" (skipping!): node "test-preload-271062" has status "Ready":"False"
	I0407 13:26:48.937505 1205243 pod_ready.go:79] waiting up to 4m0s for pod "kube-apiserver-test-preload-271062" in "kube-system" namespace to be "Ready" ...
	I0407 13:26:48.945623 1205243 pod_ready.go:98] node "test-preload-271062" hosting pod "kube-apiserver-test-preload-271062" in "kube-system" namespace is currently not "Ready" (skipping!): node "test-preload-271062" has status "Ready":"False"
	I0407 13:26:48.945654 1205243 pod_ready.go:82] duration metric: took 8.137343ms for pod "kube-apiserver-test-preload-271062" in "kube-system" namespace to be "Ready" ...
	E0407 13:26:48.945665 1205243 pod_ready.go:67] WaitExtra: waitPodCondition: node "test-preload-271062" hosting pod "kube-apiserver-test-preload-271062" in "kube-system" namespace is currently not "Ready" (skipping!): node "test-preload-271062" has status "Ready":"False"
	I0407 13:26:48.945673 1205243 pod_ready.go:79] waiting up to 4m0s for pod "kube-controller-manager-test-preload-271062" in "kube-system" namespace to be "Ready" ...
	I0407 13:26:48.959926 1205243 pod_ready.go:98] node "test-preload-271062" hosting pod "kube-controller-manager-test-preload-271062" in "kube-system" namespace is currently not "Ready" (skipping!): node "test-preload-271062" has status "Ready":"False"
	I0407 13:26:48.959956 1205243 pod_ready.go:82] duration metric: took 14.273672ms for pod "kube-controller-manager-test-preload-271062" in "kube-system" namespace to be "Ready" ...
	E0407 13:26:48.959968 1205243 pod_ready.go:67] WaitExtra: waitPodCondition: node "test-preload-271062" hosting pod "kube-controller-manager-test-preload-271062" in "kube-system" namespace is currently not "Ready" (skipping!): node "test-preload-271062" has status "Ready":"False"
	I0407 13:26:48.959989 1205243 pod_ready.go:79] waiting up to 4m0s for pod "kube-proxy-qg7pc" in "kube-system" namespace to be "Ready" ...
	I0407 13:26:49.301415 1205243 pod_ready.go:98] node "test-preload-271062" hosting pod "kube-proxy-qg7pc" in "kube-system" namespace is currently not "Ready" (skipping!): node "test-preload-271062" has status "Ready":"False"
	I0407 13:26:49.301455 1205243 pod_ready.go:82] duration metric: took 341.454755ms for pod "kube-proxy-qg7pc" in "kube-system" namespace to be "Ready" ...
	E0407 13:26:49.301471 1205243 pod_ready.go:67] WaitExtra: waitPodCondition: node "test-preload-271062" hosting pod "kube-proxy-qg7pc" in "kube-system" namespace is currently not "Ready" (skipping!): node "test-preload-271062" has status "Ready":"False"
	I0407 13:26:49.301483 1205243 pod_ready.go:79] waiting up to 4m0s for pod "kube-scheduler-test-preload-271062" in "kube-system" namespace to be "Ready" ...
	I0407 13:26:49.702768 1205243 pod_ready.go:98] node "test-preload-271062" hosting pod "kube-scheduler-test-preload-271062" in "kube-system" namespace is currently not "Ready" (skipping!): node "test-preload-271062" has status "Ready":"False"
	I0407 13:26:49.702812 1205243 pod_ready.go:82] duration metric: took 401.318517ms for pod "kube-scheduler-test-preload-271062" in "kube-system" namespace to be "Ready" ...
	E0407 13:26:49.702830 1205243 pod_ready.go:67] WaitExtra: waitPodCondition: node "test-preload-271062" hosting pod "kube-scheduler-test-preload-271062" in "kube-system" namespace is currently not "Ready" (skipping!): node "test-preload-271062" has status "Ready":"False"
	I0407 13:26:49.702856 1205243 pod_ready.go:39] duration metric: took 790.57082ms for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0407 13:26:49.702893 1205243 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0407 13:26:49.717681 1205243 ops.go:34] apiserver oom_adj: -16
	I0407 13:26:49.717727 1205243 kubeadm.go:597] duration metric: took 9.746438848s to restartPrimaryControlPlane
	I0407 13:26:49.717740 1205243 kubeadm.go:394] duration metric: took 9.798553899s to StartCluster
	I0407 13:26:49.717767 1205243 settings.go:142] acquiring lock: {Name:mk19c4dc5d7992642f3fe5ca0bdb3ea65af01b65 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0407 13:26:49.717871 1205243 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/20602-1162386/kubeconfig
	I0407 13:26:49.718914 1205243 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20602-1162386/kubeconfig: {Name:mk712863958f7dbf2601dd82dc9ca7bea42ef42f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0407 13:26:49.719276 1205243 start.go:235] Will wait 6m0s for node &{Name: IP:192.168.39.95 Port:8443 KubernetesVersion:v1.24.4 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0407 13:26:49.719417 1205243 addons.go:511] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I0407 13:26:49.719505 1205243 config.go:182] Loaded profile config "test-preload-271062": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.24.4
	I0407 13:26:49.719523 1205243 addons.go:69] Setting storage-provisioner=true in profile "test-preload-271062"
	I0407 13:26:49.719551 1205243 addons.go:238] Setting addon storage-provisioner=true in "test-preload-271062"
	W0407 13:26:49.719566 1205243 addons.go:247] addon storage-provisioner should already be in state true
	I0407 13:26:49.719605 1205243 host.go:66] Checking if "test-preload-271062" exists ...
	I0407 13:26:49.719548 1205243 addons.go:69] Setting default-storageclass=true in profile "test-preload-271062"
	I0407 13:26:49.719778 1205243 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "test-preload-271062"
	I0407 13:26:49.720052 1205243 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0407 13:26:49.720115 1205243 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0407 13:26:49.720289 1205243 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0407 13:26:49.720355 1205243 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0407 13:26:49.721408 1205243 out.go:177] * Verifying Kubernetes components...
	I0407 13:26:49.723237 1205243 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0407 13:26:49.740086 1205243 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42297
	I0407 13:26:49.740228 1205243 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35485
	I0407 13:26:49.740818 1205243 main.go:141] libmachine: () Calling .GetVersion
	I0407 13:26:49.740893 1205243 main.go:141] libmachine: () Calling .GetVersion
	I0407 13:26:49.741473 1205243 main.go:141] libmachine: Using API Version  1
	I0407 13:26:49.741496 1205243 main.go:141] libmachine: () Calling .SetConfigRaw
	I0407 13:26:49.741647 1205243 main.go:141] libmachine: Using API Version  1
	I0407 13:26:49.741677 1205243 main.go:141] libmachine: () Calling .SetConfigRaw
	I0407 13:26:49.741946 1205243 main.go:141] libmachine: () Calling .GetMachineName
	I0407 13:26:49.742103 1205243 main.go:141] libmachine: () Calling .GetMachineName
	I0407 13:26:49.742155 1205243 main.go:141] libmachine: (test-preload-271062) Calling .GetState
	I0407 13:26:49.742713 1205243 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0407 13:26:49.742768 1205243 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0407 13:26:49.745207 1205243 kapi.go:59] client config for test-preload-271062: &rest.Config{Host:"https://192.168.39.95:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/20602-1162386/.minikube/profiles/test-preload-271062/client.crt", KeyFile:"/home/jenkins/minikube-integration/20602-1162386/.minikube/profiles/test-preload-271062/client.key", CAFile:"/home/jenkins/minikube-integration/20602-1162386/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x24968c0), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I0407 13:26:49.745873 1205243 addons.go:238] Setting addon default-storageclass=true in "test-preload-271062"
	W0407 13:26:49.745903 1205243 addons.go:247] addon default-storageclass should already be in state true
	I0407 13:26:49.745944 1205243 host.go:66] Checking if "test-preload-271062" exists ...
	I0407 13:26:49.746275 1205243 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0407 13:26:49.746359 1205243 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0407 13:26:49.762446 1205243 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43927
	I0407 13:26:49.763089 1205243 main.go:141] libmachine: () Calling .GetVersion
	I0407 13:26:49.763744 1205243 main.go:141] libmachine: Using API Version  1
	I0407 13:26:49.763791 1205243 main.go:141] libmachine: () Calling .SetConfigRaw
	I0407 13:26:49.764414 1205243 main.go:141] libmachine: () Calling .GetMachineName
	I0407 13:26:49.764694 1205243 main.go:141] libmachine: (test-preload-271062) Calling .GetState
	I0407 13:26:49.766408 1205243 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43769
	I0407 13:26:49.767119 1205243 main.go:141] libmachine: (test-preload-271062) Calling .DriverName
	I0407 13:26:49.767208 1205243 main.go:141] libmachine: () Calling .GetVersion
	I0407 13:26:49.767859 1205243 main.go:141] libmachine: Using API Version  1
	I0407 13:26:49.767904 1205243 main.go:141] libmachine: () Calling .SetConfigRaw
	I0407 13:26:49.768455 1205243 main.go:141] libmachine: () Calling .GetMachineName
	I0407 13:26:49.769136 1205243 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0407 13:26:49.769172 1205243 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0407 13:26:49.769243 1205243 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0407 13:26:49.770577 1205243 addons.go:435] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0407 13:26:49.770607 1205243 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0407 13:26:49.770634 1205243 main.go:141] libmachine: (test-preload-271062) Calling .GetSSHHostname
	I0407 13:26:49.775290 1205243 main.go:141] libmachine: (test-preload-271062) DBG | domain test-preload-271062 has defined MAC address 52:54:00:e5:8f:41 in network mk-test-preload-271062
	I0407 13:26:49.775902 1205243 main.go:141] libmachine: (test-preload-271062) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e5:8f:41", ip: ""} in network mk-test-preload-271062: {Iface:virbr1 ExpiryTime:2025-04-07 14:26:16 +0000 UTC Type:0 Mac:52:54:00:e5:8f:41 Iaid: IPaddr:192.168.39.95 Prefix:24 Hostname:test-preload-271062 Clientid:01:52:54:00:e5:8f:41}
	I0407 13:26:49.775933 1205243 main.go:141] libmachine: (test-preload-271062) DBG | domain test-preload-271062 has defined IP address 192.168.39.95 and MAC address 52:54:00:e5:8f:41 in network mk-test-preload-271062
	I0407 13:26:49.776204 1205243 main.go:141] libmachine: (test-preload-271062) Calling .GetSSHPort
	I0407 13:26:49.776512 1205243 main.go:141] libmachine: (test-preload-271062) Calling .GetSSHKeyPath
	I0407 13:26:49.776769 1205243 main.go:141] libmachine: (test-preload-271062) Calling .GetSSHUsername
	I0407 13:26:49.777049 1205243 sshutil.go:53] new ssh client: &{IP:192.168.39.95 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20602-1162386/.minikube/machines/test-preload-271062/id_rsa Username:docker}
	I0407 13:26:49.808028 1205243 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36017
	I0407 13:26:49.808770 1205243 main.go:141] libmachine: () Calling .GetVersion
	I0407 13:26:49.809621 1205243 main.go:141] libmachine: Using API Version  1
	I0407 13:26:49.809670 1205243 main.go:141] libmachine: () Calling .SetConfigRaw
	I0407 13:26:49.810351 1205243 main.go:141] libmachine: () Calling .GetMachineName
	I0407 13:26:49.810674 1205243 main.go:141] libmachine: (test-preload-271062) Calling .GetState
	I0407 13:26:49.813257 1205243 main.go:141] libmachine: (test-preload-271062) Calling .DriverName
	I0407 13:26:49.813609 1205243 addons.go:435] installing /etc/kubernetes/addons/storageclass.yaml
	I0407 13:26:49.813636 1205243 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0407 13:26:49.813667 1205243 main.go:141] libmachine: (test-preload-271062) Calling .GetSSHHostname
	I0407 13:26:49.816817 1205243 main.go:141] libmachine: (test-preload-271062) DBG | domain test-preload-271062 has defined MAC address 52:54:00:e5:8f:41 in network mk-test-preload-271062
	I0407 13:26:49.817378 1205243 main.go:141] libmachine: (test-preload-271062) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e5:8f:41", ip: ""} in network mk-test-preload-271062: {Iface:virbr1 ExpiryTime:2025-04-07 14:26:16 +0000 UTC Type:0 Mac:52:54:00:e5:8f:41 Iaid: IPaddr:192.168.39.95 Prefix:24 Hostname:test-preload-271062 Clientid:01:52:54:00:e5:8f:41}
	I0407 13:26:49.817416 1205243 main.go:141] libmachine: (test-preload-271062) DBG | domain test-preload-271062 has defined IP address 192.168.39.95 and MAC address 52:54:00:e5:8f:41 in network mk-test-preload-271062
	I0407 13:26:49.817727 1205243 main.go:141] libmachine: (test-preload-271062) Calling .GetSSHPort
	I0407 13:26:49.818051 1205243 main.go:141] libmachine: (test-preload-271062) Calling .GetSSHKeyPath
	I0407 13:26:49.818278 1205243 main.go:141] libmachine: (test-preload-271062) Calling .GetSSHUsername
	I0407 13:26:49.818529 1205243 sshutil.go:53] new ssh client: &{IP:192.168.39.95 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20602-1162386/.minikube/machines/test-preload-271062/id_rsa Username:docker}
	I0407 13:26:49.930865 1205243 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0407 13:26:49.951371 1205243 node_ready.go:35] waiting up to 6m0s for node "test-preload-271062" to be "Ready" ...
	I0407 13:26:50.038364 1205243 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.24.4/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0407 13:26:50.090857 1205243 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.24.4/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0407 13:26:51.039362 1205243 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.24.4/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (1.000945784s)
	I0407 13:26:51.039440 1205243 main.go:141] libmachine: Making call to close driver server
	I0407 13:26:51.039454 1205243 main.go:141] libmachine: (test-preload-271062) Calling .Close
	I0407 13:26:51.039457 1205243 main.go:141] libmachine: Making call to close driver server
	I0407 13:26:51.039479 1205243 main.go:141] libmachine: (test-preload-271062) Calling .Close
	I0407 13:26:51.039810 1205243 main.go:141] libmachine: (test-preload-271062) DBG | Closing plugin on server side
	I0407 13:26:51.039831 1205243 main.go:141] libmachine: (test-preload-271062) DBG | Closing plugin on server side
	I0407 13:26:51.039819 1205243 main.go:141] libmachine: Successfully made call to close driver server
	I0407 13:26:51.039854 1205243 main.go:141] libmachine: Successfully made call to close driver server
	I0407 13:26:51.039860 1205243 main.go:141] libmachine: Making call to close connection to plugin binary
	I0407 13:26:51.039869 1205243 main.go:141] libmachine: Making call to close connection to plugin binary
	I0407 13:26:51.039872 1205243 main.go:141] libmachine: Making call to close driver server
	I0407 13:26:51.039881 1205243 main.go:141] libmachine: (test-preload-271062) Calling .Close
	I0407 13:26:51.039927 1205243 main.go:141] libmachine: Making call to close driver server
	I0407 13:26:51.039956 1205243 main.go:141] libmachine: (test-preload-271062) Calling .Close
	I0407 13:26:51.040203 1205243 main.go:141] libmachine: Successfully made call to close driver server
	I0407 13:26:51.040218 1205243 main.go:141] libmachine: Making call to close connection to plugin binary
	I0407 13:26:51.040242 1205243 main.go:141] libmachine: Successfully made call to close driver server
	I0407 13:26:51.040254 1205243 main.go:141] libmachine: Making call to close connection to plugin binary
	I0407 13:26:51.047109 1205243 main.go:141] libmachine: Making call to close driver server
	I0407 13:26:51.047144 1205243 main.go:141] libmachine: (test-preload-271062) Calling .Close
	I0407 13:26:51.047497 1205243 main.go:141] libmachine: Successfully made call to close driver server
	I0407 13:26:51.047520 1205243 main.go:141] libmachine: Making call to close connection to plugin binary
	I0407 13:26:51.047538 1205243 main.go:141] libmachine: (test-preload-271062) DBG | Closing plugin on server side
	I0407 13:26:51.049865 1205243 out.go:177] * Enabled addons: storage-provisioner, default-storageclass
	I0407 13:26:51.051530 1205243 addons.go:514] duration metric: took 1.33211227s for enable addons: enabled=[storage-provisioner default-storageclass]
	I0407 13:26:51.956999 1205243 node_ready.go:53] node "test-preload-271062" has status "Ready":"False"
	I0407 13:26:54.455682 1205243 node_ready.go:53] node "test-preload-271062" has status "Ready":"False"
	I0407 13:26:56.457105 1205243 node_ready.go:53] node "test-preload-271062" has status "Ready":"False"
	I0407 13:26:57.456505 1205243 node_ready.go:49] node "test-preload-271062" has status "Ready":"True"
	I0407 13:26:57.456543 1205243 node_ready.go:38] duration metric: took 7.505122763s for node "test-preload-271062" to be "Ready" ...
	I0407 13:26:57.456557 1205243 pod_ready.go:36] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0407 13:26:57.460390 1205243 pod_ready.go:79] waiting up to 6m0s for pod "coredns-6d4b75cb6d-wdb4j" in "kube-system" namespace to be "Ready" ...
	I0407 13:26:57.466913 1205243 pod_ready.go:93] pod "coredns-6d4b75cb6d-wdb4j" in "kube-system" namespace has status "Ready":"True"
	I0407 13:26:57.466944 1205243 pod_ready.go:82] duration metric: took 6.43711ms for pod "coredns-6d4b75cb6d-wdb4j" in "kube-system" namespace to be "Ready" ...
	I0407 13:26:57.466956 1205243 pod_ready.go:79] waiting up to 6m0s for pod "etcd-test-preload-271062" in "kube-system" namespace to be "Ready" ...
	I0407 13:26:57.473010 1205243 pod_ready.go:93] pod "etcd-test-preload-271062" in "kube-system" namespace has status "Ready":"True"
	I0407 13:26:57.473042 1205243 pod_ready.go:82] duration metric: took 6.078175ms for pod "etcd-test-preload-271062" in "kube-system" namespace to be "Ready" ...
	I0407 13:26:57.473056 1205243 pod_ready.go:79] waiting up to 6m0s for pod "kube-apiserver-test-preload-271062" in "kube-system" namespace to be "Ready" ...
	I0407 13:26:57.478983 1205243 pod_ready.go:93] pod "kube-apiserver-test-preload-271062" in "kube-system" namespace has status "Ready":"True"
	I0407 13:26:57.479011 1205243 pod_ready.go:82] duration metric: took 5.946209ms for pod "kube-apiserver-test-preload-271062" in "kube-system" namespace to be "Ready" ...
	I0407 13:26:57.479023 1205243 pod_ready.go:79] waiting up to 6m0s for pod "kube-controller-manager-test-preload-271062" in "kube-system" namespace to be "Ready" ...
	I0407 13:26:59.487635 1205243 pod_ready.go:103] pod "kube-controller-manager-test-preload-271062" in "kube-system" namespace has status "Ready":"False"
	I0407 13:27:00.988160 1205243 pod_ready.go:93] pod "kube-controller-manager-test-preload-271062" in "kube-system" namespace has status "Ready":"True"
	I0407 13:27:00.988198 1205243 pod_ready.go:82] duration metric: took 3.509166763s for pod "kube-controller-manager-test-preload-271062" in "kube-system" namespace to be "Ready" ...
	I0407 13:27:00.988217 1205243 pod_ready.go:79] waiting up to 6m0s for pod "kube-proxy-qg7pc" in "kube-system" namespace to be "Ready" ...
	I0407 13:27:00.997453 1205243 pod_ready.go:93] pod "kube-proxy-qg7pc" in "kube-system" namespace has status "Ready":"True"
	I0407 13:27:00.997494 1205243 pod_ready.go:82] duration metric: took 9.267054ms for pod "kube-proxy-qg7pc" in "kube-system" namespace to be "Ready" ...
	I0407 13:27:00.997509 1205243 pod_ready.go:79] waiting up to 6m0s for pod "kube-scheduler-test-preload-271062" in "kube-system" namespace to be "Ready" ...
	I0407 13:27:01.056715 1205243 pod_ready.go:93] pod "kube-scheduler-test-preload-271062" in "kube-system" namespace has status "Ready":"True"
	I0407 13:27:01.056751 1205243 pod_ready.go:82] duration metric: took 59.232093ms for pod "kube-scheduler-test-preload-271062" in "kube-system" namespace to be "Ready" ...
	I0407 13:27:01.056768 1205243 pod_ready.go:39] duration metric: took 3.600196189s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0407 13:27:01.056790 1205243 api_server.go:52] waiting for apiserver process to appear ...
	I0407 13:27:01.056872 1205243 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0407 13:27:01.075880 1205243 api_server.go:72] duration metric: took 11.356561175s to wait for apiserver process to appear ...
	I0407 13:27:01.075918 1205243 api_server.go:88] waiting for apiserver healthz status ...
	I0407 13:27:01.075976 1205243 api_server.go:253] Checking apiserver healthz at https://192.168.39.95:8443/healthz ...
	I0407 13:27:01.084958 1205243 api_server.go:279] https://192.168.39.95:8443/healthz returned 200:
	ok
	I0407 13:27:01.086500 1205243 api_server.go:141] control plane version: v1.24.4
	I0407 13:27:01.086528 1205243 api_server.go:131] duration metric: took 10.601054ms to wait for apiserver health ...
	I0407 13:27:01.086540 1205243 system_pods.go:43] waiting for kube-system pods to appear ...
	I0407 13:27:01.258084 1205243 system_pods.go:59] 7 kube-system pods found
	I0407 13:27:01.258126 1205243 system_pods.go:61] "coredns-6d4b75cb6d-wdb4j" [4ae9f760-b70d-4ff4-9970-c14943d244dd] Running
	I0407 13:27:01.258135 1205243 system_pods.go:61] "etcd-test-preload-271062" [dae0a5b9-82b8-4fea-9c83-81460ed11d22] Running
	I0407 13:27:01.258141 1205243 system_pods.go:61] "kube-apiserver-test-preload-271062" [a43de856-cd99-4cb1-b94a-7e917bfabde1] Running
	I0407 13:27:01.258147 1205243 system_pods.go:61] "kube-controller-manager-test-preload-271062" [e6b67d44-7cf0-4ed1-91ed-4c85c1a58184] Running
	I0407 13:27:01.258151 1205243 system_pods.go:61] "kube-proxy-qg7pc" [f98c7d5e-7a3d-4fa6-af06-0ff774d44c61] Running
	I0407 13:27:01.258161 1205243 system_pods.go:61] "kube-scheduler-test-preload-271062" [c2d0b897-2a1f-4656-9696-6b1b5fa30dae] Running
	I0407 13:27:01.258170 1205243 system_pods.go:61] "storage-provisioner" [f7ac2922-9495-45c8-a013-84c2440685e0] Running / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I0407 13:27:01.258181 1205243 system_pods.go:74] duration metric: took 171.63217ms to wait for pod list to return data ...
	I0407 13:27:01.258201 1205243 default_sa.go:34] waiting for default service account to be created ...
	I0407 13:27:01.456286 1205243 default_sa.go:45] found service account: "default"
	I0407 13:27:01.456335 1205243 default_sa.go:55] duration metric: took 198.106715ms for default service account to be created ...
	I0407 13:27:01.456348 1205243 system_pods.go:116] waiting for k8s-apps to be running ...
	I0407 13:27:01.658421 1205243 system_pods.go:86] 7 kube-system pods found
	I0407 13:27:01.658455 1205243 system_pods.go:89] "coredns-6d4b75cb6d-wdb4j" [4ae9f760-b70d-4ff4-9970-c14943d244dd] Running
	I0407 13:27:01.658462 1205243 system_pods.go:89] "etcd-test-preload-271062" [dae0a5b9-82b8-4fea-9c83-81460ed11d22] Running
	I0407 13:27:01.658466 1205243 system_pods.go:89] "kube-apiserver-test-preload-271062" [a43de856-cd99-4cb1-b94a-7e917bfabde1] Running
	I0407 13:27:01.658469 1205243 system_pods.go:89] "kube-controller-manager-test-preload-271062" [e6b67d44-7cf0-4ed1-91ed-4c85c1a58184] Running
	I0407 13:27:01.658473 1205243 system_pods.go:89] "kube-proxy-qg7pc" [f98c7d5e-7a3d-4fa6-af06-0ff774d44c61] Running
	I0407 13:27:01.658476 1205243 system_pods.go:89] "kube-scheduler-test-preload-271062" [c2d0b897-2a1f-4656-9696-6b1b5fa30dae] Running
	I0407 13:27:01.658482 1205243 system_pods.go:89] "storage-provisioner" [f7ac2922-9495-45c8-a013-84c2440685e0] Running / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I0407 13:27:01.658492 1205243 system_pods.go:126] duration metric: took 202.136857ms to wait for k8s-apps to be running ...
	I0407 13:27:01.658503 1205243 system_svc.go:44] waiting for kubelet service to be running ....
	I0407 13:27:01.658550 1205243 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0407 13:27:01.675674 1205243 system_svc.go:56] duration metric: took 17.159014ms WaitForService to wait for kubelet
	I0407 13:27:01.675712 1205243 kubeadm.go:582] duration metric: took 11.956402816s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0407 13:27:01.675732 1205243 node_conditions.go:102] verifying NodePressure condition ...
	I0407 13:27:01.857304 1205243 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0407 13:27:01.857338 1205243 node_conditions.go:123] node cpu capacity is 2
	I0407 13:27:01.857349 1205243 node_conditions.go:105] duration metric: took 181.612313ms to run NodePressure ...
	I0407 13:27:01.857362 1205243 start.go:241] waiting for startup goroutines ...
	I0407 13:27:01.857369 1205243 start.go:246] waiting for cluster config update ...
	I0407 13:27:01.857380 1205243 start.go:255] writing updated cluster config ...
	I0407 13:27:01.857682 1205243 ssh_runner.go:195] Run: rm -f paused
	I0407 13:27:01.914322 1205243 start.go:600] kubectl: 1.32.3, cluster: 1.24.4 (minor skew: 8)
	I0407 13:27:01.916560 1205243 out.go:201] 
	W0407 13:27:01.918413 1205243 out.go:270] ! /usr/local/bin/kubectl is version 1.32.3, which may have incompatibilities with Kubernetes 1.24.4.
	I0407 13:27:01.920252 1205243 out.go:177]   - Want kubectl v1.24.4? Try 'minikube kubectl -- get pods -A'
	I0407 13:27:01.922046 1205243 out.go:177] * Done! kubectl is now configured to use "test-preload-271062" cluster and "default" namespace by default
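	
	The startup log above ends with a minor-version skew warning: the host kubectl is v1.32.3 while the cluster runs v1.24.4. A minimal way to avoid that skew, assuming only the profile name shown in this log, is the bundled-kubectl route the warning itself suggests, since minikube kubectl fetches a client matching the cluster version:
	
	  # use minikube's bundled kubectl so the client matches the cluster's v1.24.4
	  out/minikube-linux-amd64 -p test-preload-271062 kubectl -- version --client
	  out/minikube-linux-amd64 -p test-preload-271062 kubectl -- get pods -A
	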
	
	
	==> CRI-O <==
	Apr 07 13:27:03 test-preload-271062 crio[664]: time="2025-04-07 13:27:03.014188714Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1744032423014164074,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:119830,},InodesUsed:&UInt64Value{Value:76,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=f828c083-0090-4861-a7ee-575585f4b1fb name=/runtime.v1.ImageService/ImageFsInfo
	Apr 07 13:27:03 test-preload-271062 crio[664]: time="2025-04-07 13:27:03.015067393Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=e4aa56ba-b49f-4ac0-aa7c-3a4640ec0eed name=/runtime.v1.RuntimeService/ListContainers
	Apr 07 13:27:03 test-preload-271062 crio[664]: time="2025-04-07 13:27:03.015127118Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=e4aa56ba-b49f-4ac0-aa7c-3a4640ec0eed name=/runtime.v1.RuntimeService/ListContainers
	Apr 07 13:27:03 test-preload-271062 crio[664]: time="2025-04-07 13:27:03.015299883Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:d5c4ffcaa722f89a876567240c1567de5a9779b825b59456e1f9a39a1ab93a9e,PodSandboxId:d17566705983a5767597e258a601246c6a9330b51f79fa50cefc5993ba69b387,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:a4ca41631cc7ac19ce1be3ebf0314ac5f47af7c711f17066006db82ee3b75b03,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a4ca41631cc7ac19ce1be3ebf0314ac5f47af7c711f17066006db82ee3b75b03,State:CONTAINER_RUNNING,CreatedAt:1744032415465899192,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-6d4b75cb6d-wdb4j,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4ae9f760-b70d-4ff4-9970-c14943d244dd,},Annotations:map[string]string{io.kubernetes.container.hash: c3b98f3a,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"pr
otocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e88dba2f36e087e90c3ffba259550940c635ac3421a91f614003937cce602220,PodSandboxId:892205dcc0e01f545cbf643367a6263bde4c38a2aee97e8f60447fc34286bfd9,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:2,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_EXITED,CreatedAt:1744032408561029109,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-
system,io.kubernetes.pod.uid: f7ac2922-9495-45c8-a013-84c2440685e0,},Annotations:map[string]string{io.kubernetes.container.hash: fb807027,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4f1f2b54edc7b5b1722dd59e51d02f8bd45d852a9948021c479ef1f7c4397d37,PodSandboxId:eb77af93845408aad1adb88f72b68cf8dd67732dfc0269440229244cec8698f0,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:7a53d1e08ef58144850b48d05908b4ef5b611bff99a5a66dbcba7ab9f79433f7,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:7a53d1e08ef58144850b48d05908b4ef5b611bff99a5a66dbcba7ab9f79433f7,State:CONTAINER_RUNNING,CreatedAt:1744032408388414197,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-qg7pc,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f98
c7d5e-7a3d-4fa6-af06-0ff774d44c61,},Annotations:map[string]string{io.kubernetes.container.hash: 37ea5102,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c3eb62c313b25caeb5b04e483e5dec6ac8b064065d547c220556abeb6bb5ba59,PodSandboxId:18adbe500314f012b8522eefaff5efb26707b2cdf98be283984d32d9ab063be4,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:aebe758cef4cd05b9f8cee39758227714d02f42ef3088023c1e3cd454f927a2b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:aebe758cef4cd05b9f8cee39758227714d02f42ef3088023c1e3cd454f927a2b,State:CONTAINER_RUNNING,CreatedAt:1744032402242256650,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-test-preload-271062,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a4590a1608a3bfed3d03bcaf5f3b4510,},Annot
ations:map[string]string{io.kubernetes.container.hash: 2d5fa692,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:72604ea7454ff2d8a772b0317ce6eef1d91b988874b085670874d756530ed295,PodSandboxId:769453119ffe251b98e2e58dd44641888bd6e6e7d82a20cc3fa69a93e67eaa70,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:1f99cb6da9a82e81081f65acdad10cdca2e5ec4084f91009bdcff31dd6151d48,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1f99cb6da9a82e81081f65acdad10cdca2e5ec4084f91009bdcff31dd6151d48,State:CONTAINER_RUNNING,CreatedAt:1744032402198524068,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-test-preload-271062,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: bd7b6a88d61d411256d3d193
dac7e00a,},Annotations:map[string]string{io.kubernetes.container.hash: eb7ed27d,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:79f60ac8a71648106924fd9bda3c85b33b6ee20a81b9c151bf92b18ec3c1e0e6,PodSandboxId:df9a44d2fd73b4d8c52a17ef0a29c487d6fee577d7f1257fa5ece9d755fdfe00,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:03fa22539fc1ccdb96fb15098e7a02fff03d0e366ce5d80891eb0a3a8594a0c9,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:03fa22539fc1ccdb96fb15098e7a02fff03d0e366ce5d80891eb0a3a8594a0c9,State:CONTAINER_RUNNING,CreatedAt:1744032402152482116,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-test-preload-271062,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3fcd048c4d7905e48815569eaec6f119,},
Annotations:map[string]string{io.kubernetes.container.hash: 5b4977d5,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5c62b83f3baca12c9fc45803d58de6c092e3d7ee5b70711de96148fade583ee2,PodSandboxId:5d4179f426c8f52d97011f6d8f1e5bfd92fae4ea715d7f2ff4395c3a8df4c425,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:6cab9d1bed1be49c215505c1a438ce0af66eb54b4e95f06e52037fcd36631f3d,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6cab9d1bed1be49c215505c1a438ce0af66eb54b4e95f06e52037fcd36631f3d,State:CONTAINER_RUNNING,CreatedAt:1744032402105293162,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-test-preload-271062,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c317c4234e4407c55e1ec406ad930620,},Annotations
:map[string]string{io.kubernetes.container.hash: 8bbfaed3,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=e4aa56ba-b49f-4ac0-aa7c-3a4640ec0eed name=/runtime.v1.RuntimeService/ListContainers
	Apr 07 13:27:03 test-preload-271062 crio[664]: time="2025-04-07 13:27:03.057765425Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=d8dbc295-9ee1-402f-82fa-4a9d230a039d name=/runtime.v1.RuntimeService/Version
	Apr 07 13:27:03 test-preload-271062 crio[664]: time="2025-04-07 13:27:03.057861832Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=d8dbc295-9ee1-402f-82fa-4a9d230a039d name=/runtime.v1.RuntimeService/Version
	Apr 07 13:27:03 test-preload-271062 crio[664]: time="2025-04-07 13:27:03.058953638Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=4988c0ff-62b0-4767-973d-608df24347fc name=/runtime.v1.ImageService/ImageFsInfo
	Apr 07 13:27:03 test-preload-271062 crio[664]: time="2025-04-07 13:27:03.059412513Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1744032423059389175,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:119830,},InodesUsed:&UInt64Value{Value:76,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=4988c0ff-62b0-4767-973d-608df24347fc name=/runtime.v1.ImageService/ImageFsInfo
	Apr 07 13:27:03 test-preload-271062 crio[664]: time="2025-04-07 13:27:03.060167986Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=451d8174-a26d-4789-9728-5872aeac9fa5 name=/runtime.v1.RuntimeService/ListContainers
	Apr 07 13:27:03 test-preload-271062 crio[664]: time="2025-04-07 13:27:03.060225951Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=451d8174-a26d-4789-9728-5872aeac9fa5 name=/runtime.v1.RuntimeService/ListContainers
	Apr 07 13:27:03 test-preload-271062 crio[664]: time="2025-04-07 13:27:03.060393990Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:d5c4ffcaa722f89a876567240c1567de5a9779b825b59456e1f9a39a1ab93a9e,PodSandboxId:d17566705983a5767597e258a601246c6a9330b51f79fa50cefc5993ba69b387,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:a4ca41631cc7ac19ce1be3ebf0314ac5f47af7c711f17066006db82ee3b75b03,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a4ca41631cc7ac19ce1be3ebf0314ac5f47af7c711f17066006db82ee3b75b03,State:CONTAINER_RUNNING,CreatedAt:1744032415465899192,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-6d4b75cb6d-wdb4j,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4ae9f760-b70d-4ff4-9970-c14943d244dd,},Annotations:map[string]string{io.kubernetes.container.hash: c3b98f3a,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"pr
otocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e88dba2f36e087e90c3ffba259550940c635ac3421a91f614003937cce602220,PodSandboxId:892205dcc0e01f545cbf643367a6263bde4c38a2aee97e8f60447fc34286bfd9,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:2,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_EXITED,CreatedAt:1744032408561029109,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-
system,io.kubernetes.pod.uid: f7ac2922-9495-45c8-a013-84c2440685e0,},Annotations:map[string]string{io.kubernetes.container.hash: fb807027,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4f1f2b54edc7b5b1722dd59e51d02f8bd45d852a9948021c479ef1f7c4397d37,PodSandboxId:eb77af93845408aad1adb88f72b68cf8dd67732dfc0269440229244cec8698f0,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:7a53d1e08ef58144850b48d05908b4ef5b611bff99a5a66dbcba7ab9f79433f7,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:7a53d1e08ef58144850b48d05908b4ef5b611bff99a5a66dbcba7ab9f79433f7,State:CONTAINER_RUNNING,CreatedAt:1744032408388414197,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-qg7pc,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f98
c7d5e-7a3d-4fa6-af06-0ff774d44c61,},Annotations:map[string]string{io.kubernetes.container.hash: 37ea5102,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c3eb62c313b25caeb5b04e483e5dec6ac8b064065d547c220556abeb6bb5ba59,PodSandboxId:18adbe500314f012b8522eefaff5efb26707b2cdf98be283984d32d9ab063be4,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:aebe758cef4cd05b9f8cee39758227714d02f42ef3088023c1e3cd454f927a2b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:aebe758cef4cd05b9f8cee39758227714d02f42ef3088023c1e3cd454f927a2b,State:CONTAINER_RUNNING,CreatedAt:1744032402242256650,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-test-preload-271062,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a4590a1608a3bfed3d03bcaf5f3b4510,},Annot
ations:map[string]string{io.kubernetes.container.hash: 2d5fa692,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:72604ea7454ff2d8a772b0317ce6eef1d91b988874b085670874d756530ed295,PodSandboxId:769453119ffe251b98e2e58dd44641888bd6e6e7d82a20cc3fa69a93e67eaa70,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:1f99cb6da9a82e81081f65acdad10cdca2e5ec4084f91009bdcff31dd6151d48,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1f99cb6da9a82e81081f65acdad10cdca2e5ec4084f91009bdcff31dd6151d48,State:CONTAINER_RUNNING,CreatedAt:1744032402198524068,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-test-preload-271062,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: bd7b6a88d61d411256d3d193
dac7e00a,},Annotations:map[string]string{io.kubernetes.container.hash: eb7ed27d,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:79f60ac8a71648106924fd9bda3c85b33b6ee20a81b9c151bf92b18ec3c1e0e6,PodSandboxId:df9a44d2fd73b4d8c52a17ef0a29c487d6fee577d7f1257fa5ece9d755fdfe00,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:03fa22539fc1ccdb96fb15098e7a02fff03d0e366ce5d80891eb0a3a8594a0c9,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:03fa22539fc1ccdb96fb15098e7a02fff03d0e366ce5d80891eb0a3a8594a0c9,State:CONTAINER_RUNNING,CreatedAt:1744032402152482116,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-test-preload-271062,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3fcd048c4d7905e48815569eaec6f119,},
Annotations:map[string]string{io.kubernetes.container.hash: 5b4977d5,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5c62b83f3baca12c9fc45803d58de6c092e3d7ee5b70711de96148fade583ee2,PodSandboxId:5d4179f426c8f52d97011f6d8f1e5bfd92fae4ea715d7f2ff4395c3a8df4c425,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:6cab9d1bed1be49c215505c1a438ce0af66eb54b4e95f06e52037fcd36631f3d,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6cab9d1bed1be49c215505c1a438ce0af66eb54b4e95f06e52037fcd36631f3d,State:CONTAINER_RUNNING,CreatedAt:1744032402105293162,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-test-preload-271062,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c317c4234e4407c55e1ec406ad930620,},Annotations
:map[string]string{io.kubernetes.container.hash: 8bbfaed3,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=451d8174-a26d-4789-9728-5872aeac9fa5 name=/runtime.v1.RuntimeService/ListContainers
	Apr 07 13:27:03 test-preload-271062 crio[664]: time="2025-04-07 13:27:03.109807816Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=196dc46e-1b72-43f4-9332-0820c576ae30 name=/runtime.v1.RuntimeService/Version
	Apr 07 13:27:03 test-preload-271062 crio[664]: time="2025-04-07 13:27:03.109883682Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=196dc46e-1b72-43f4-9332-0820c576ae30 name=/runtime.v1.RuntimeService/Version
	Apr 07 13:27:03 test-preload-271062 crio[664]: time="2025-04-07 13:27:03.111575732Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=aa9a5384-4c44-4137-b9ed-1aa7241973a4 name=/runtime.v1.ImageService/ImageFsInfo
	Apr 07 13:27:03 test-preload-271062 crio[664]: time="2025-04-07 13:27:03.112461504Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1744032423112425541,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:119830,},InodesUsed:&UInt64Value{Value:76,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=aa9a5384-4c44-4137-b9ed-1aa7241973a4 name=/runtime.v1.ImageService/ImageFsInfo
	Apr 07 13:27:03 test-preload-271062 crio[664]: time="2025-04-07 13:27:03.113408897Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=43d5b55b-513d-4606-9079-d3728a64350e name=/runtime.v1.RuntimeService/ListContainers
	Apr 07 13:27:03 test-preload-271062 crio[664]: time="2025-04-07 13:27:03.113480160Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=43d5b55b-513d-4606-9079-d3728a64350e name=/runtime.v1.RuntimeService/ListContainers
	Apr 07 13:27:03 test-preload-271062 crio[664]: time="2025-04-07 13:27:03.113686460Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:d5c4ffcaa722f89a876567240c1567de5a9779b825b59456e1f9a39a1ab93a9e,PodSandboxId:d17566705983a5767597e258a601246c6a9330b51f79fa50cefc5993ba69b387,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:a4ca41631cc7ac19ce1be3ebf0314ac5f47af7c711f17066006db82ee3b75b03,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a4ca41631cc7ac19ce1be3ebf0314ac5f47af7c711f17066006db82ee3b75b03,State:CONTAINER_RUNNING,CreatedAt:1744032415465899192,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-6d4b75cb6d-wdb4j,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4ae9f760-b70d-4ff4-9970-c14943d244dd,},Annotations:map[string]string{io.kubernetes.container.hash: c3b98f3a,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"pr
otocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e88dba2f36e087e90c3ffba259550940c635ac3421a91f614003937cce602220,PodSandboxId:892205dcc0e01f545cbf643367a6263bde4c38a2aee97e8f60447fc34286bfd9,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:2,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_EXITED,CreatedAt:1744032408561029109,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-
system,io.kubernetes.pod.uid: f7ac2922-9495-45c8-a013-84c2440685e0,},Annotations:map[string]string{io.kubernetes.container.hash: fb807027,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4f1f2b54edc7b5b1722dd59e51d02f8bd45d852a9948021c479ef1f7c4397d37,PodSandboxId:eb77af93845408aad1adb88f72b68cf8dd67732dfc0269440229244cec8698f0,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:7a53d1e08ef58144850b48d05908b4ef5b611bff99a5a66dbcba7ab9f79433f7,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:7a53d1e08ef58144850b48d05908b4ef5b611bff99a5a66dbcba7ab9f79433f7,State:CONTAINER_RUNNING,CreatedAt:1744032408388414197,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-qg7pc,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f98
c7d5e-7a3d-4fa6-af06-0ff774d44c61,},Annotations:map[string]string{io.kubernetes.container.hash: 37ea5102,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c3eb62c313b25caeb5b04e483e5dec6ac8b064065d547c220556abeb6bb5ba59,PodSandboxId:18adbe500314f012b8522eefaff5efb26707b2cdf98be283984d32d9ab063be4,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:aebe758cef4cd05b9f8cee39758227714d02f42ef3088023c1e3cd454f927a2b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:aebe758cef4cd05b9f8cee39758227714d02f42ef3088023c1e3cd454f927a2b,State:CONTAINER_RUNNING,CreatedAt:1744032402242256650,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-test-preload-271062,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a4590a1608a3bfed3d03bcaf5f3b4510,},Annot
ations:map[string]string{io.kubernetes.container.hash: 2d5fa692,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:72604ea7454ff2d8a772b0317ce6eef1d91b988874b085670874d756530ed295,PodSandboxId:769453119ffe251b98e2e58dd44641888bd6e6e7d82a20cc3fa69a93e67eaa70,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:1f99cb6da9a82e81081f65acdad10cdca2e5ec4084f91009bdcff31dd6151d48,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1f99cb6da9a82e81081f65acdad10cdca2e5ec4084f91009bdcff31dd6151d48,State:CONTAINER_RUNNING,CreatedAt:1744032402198524068,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-test-preload-271062,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: bd7b6a88d61d411256d3d193
dac7e00a,},Annotations:map[string]string{io.kubernetes.container.hash: eb7ed27d,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:79f60ac8a71648106924fd9bda3c85b33b6ee20a81b9c151bf92b18ec3c1e0e6,PodSandboxId:df9a44d2fd73b4d8c52a17ef0a29c487d6fee577d7f1257fa5ece9d755fdfe00,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:03fa22539fc1ccdb96fb15098e7a02fff03d0e366ce5d80891eb0a3a8594a0c9,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:03fa22539fc1ccdb96fb15098e7a02fff03d0e366ce5d80891eb0a3a8594a0c9,State:CONTAINER_RUNNING,CreatedAt:1744032402152482116,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-test-preload-271062,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3fcd048c4d7905e48815569eaec6f119,},
Annotations:map[string]string{io.kubernetes.container.hash: 5b4977d5,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5c62b83f3baca12c9fc45803d58de6c092e3d7ee5b70711de96148fade583ee2,PodSandboxId:5d4179f426c8f52d97011f6d8f1e5bfd92fae4ea715d7f2ff4395c3a8df4c425,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:6cab9d1bed1be49c215505c1a438ce0af66eb54b4e95f06e52037fcd36631f3d,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6cab9d1bed1be49c215505c1a438ce0af66eb54b4e95f06e52037fcd36631f3d,State:CONTAINER_RUNNING,CreatedAt:1744032402105293162,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-test-preload-271062,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c317c4234e4407c55e1ec406ad930620,},Annotations
:map[string]string{io.kubernetes.container.hash: 8bbfaed3,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=43d5b55b-513d-4606-9079-d3728a64350e name=/runtime.v1.RuntimeService/ListContainers
	Apr 07 13:27:03 test-preload-271062 crio[664]: time="2025-04-07 13:27:03.153179538Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=181ab4ad-474f-43a0-91b9-a66413baa27f name=/runtime.v1.RuntimeService/Version
	Apr 07 13:27:03 test-preload-271062 crio[664]: time="2025-04-07 13:27:03.153279034Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=181ab4ad-474f-43a0-91b9-a66413baa27f name=/runtime.v1.RuntimeService/Version
	Apr 07 13:27:03 test-preload-271062 crio[664]: time="2025-04-07 13:27:03.154763964Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=23712bdf-d720-410c-b2fd-e4cd64f2adf4 name=/runtime.v1.ImageService/ImageFsInfo
	Apr 07 13:27:03 test-preload-271062 crio[664]: time="2025-04-07 13:27:03.155227157Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1744032423155202678,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:119830,},InodesUsed:&UInt64Value{Value:76,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=23712bdf-d720-410c-b2fd-e4cd64f2adf4 name=/runtime.v1.ImageService/ImageFsInfo
	Apr 07 13:27:03 test-preload-271062 crio[664]: time="2025-04-07 13:27:03.156076136Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=7ca45b01-13a6-48a5-a8d8-8ef9452b74fb name=/runtime.v1.RuntimeService/ListContainers
	Apr 07 13:27:03 test-preload-271062 crio[664]: time="2025-04-07 13:27:03.156140323Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=7ca45b01-13a6-48a5-a8d8-8ef9452b74fb name=/runtime.v1.RuntimeService/ListContainers
	Apr 07 13:27:03 test-preload-271062 crio[664]: time="2025-04-07 13:27:03.156326968Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:d5c4ffcaa722f89a876567240c1567de5a9779b825b59456e1f9a39a1ab93a9e,PodSandboxId:d17566705983a5767597e258a601246c6a9330b51f79fa50cefc5993ba69b387,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:a4ca41631cc7ac19ce1be3ebf0314ac5f47af7c711f17066006db82ee3b75b03,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a4ca41631cc7ac19ce1be3ebf0314ac5f47af7c711f17066006db82ee3b75b03,State:CONTAINER_RUNNING,CreatedAt:1744032415465899192,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-6d4b75cb6d-wdb4j,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4ae9f760-b70d-4ff4-9970-c14943d244dd,},Annotations:map[string]string{io.kubernetes.container.hash: c3b98f3a,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"pr
otocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e88dba2f36e087e90c3ffba259550940c635ac3421a91f614003937cce602220,PodSandboxId:892205dcc0e01f545cbf643367a6263bde4c38a2aee97e8f60447fc34286bfd9,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:2,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_EXITED,CreatedAt:1744032408561029109,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-
system,io.kubernetes.pod.uid: f7ac2922-9495-45c8-a013-84c2440685e0,},Annotations:map[string]string{io.kubernetes.container.hash: fb807027,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4f1f2b54edc7b5b1722dd59e51d02f8bd45d852a9948021c479ef1f7c4397d37,PodSandboxId:eb77af93845408aad1adb88f72b68cf8dd67732dfc0269440229244cec8698f0,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:7a53d1e08ef58144850b48d05908b4ef5b611bff99a5a66dbcba7ab9f79433f7,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:7a53d1e08ef58144850b48d05908b4ef5b611bff99a5a66dbcba7ab9f79433f7,State:CONTAINER_RUNNING,CreatedAt:1744032408388414197,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-qg7pc,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f98
c7d5e-7a3d-4fa6-af06-0ff774d44c61,},Annotations:map[string]string{io.kubernetes.container.hash: 37ea5102,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c3eb62c313b25caeb5b04e483e5dec6ac8b064065d547c220556abeb6bb5ba59,PodSandboxId:18adbe500314f012b8522eefaff5efb26707b2cdf98be283984d32d9ab063be4,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:aebe758cef4cd05b9f8cee39758227714d02f42ef3088023c1e3cd454f927a2b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:aebe758cef4cd05b9f8cee39758227714d02f42ef3088023c1e3cd454f927a2b,State:CONTAINER_RUNNING,CreatedAt:1744032402242256650,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-test-preload-271062,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a4590a1608a3bfed3d03bcaf5f3b4510,},Annot
ations:map[string]string{io.kubernetes.container.hash: 2d5fa692,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:72604ea7454ff2d8a772b0317ce6eef1d91b988874b085670874d756530ed295,PodSandboxId:769453119ffe251b98e2e58dd44641888bd6e6e7d82a20cc3fa69a93e67eaa70,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:1f99cb6da9a82e81081f65acdad10cdca2e5ec4084f91009bdcff31dd6151d48,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1f99cb6da9a82e81081f65acdad10cdca2e5ec4084f91009bdcff31dd6151d48,State:CONTAINER_RUNNING,CreatedAt:1744032402198524068,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-test-preload-271062,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: bd7b6a88d61d411256d3d193
dac7e00a,},Annotations:map[string]string{io.kubernetes.container.hash: eb7ed27d,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:79f60ac8a71648106924fd9bda3c85b33b6ee20a81b9c151bf92b18ec3c1e0e6,PodSandboxId:df9a44d2fd73b4d8c52a17ef0a29c487d6fee577d7f1257fa5ece9d755fdfe00,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:03fa22539fc1ccdb96fb15098e7a02fff03d0e366ce5d80891eb0a3a8594a0c9,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:03fa22539fc1ccdb96fb15098e7a02fff03d0e366ce5d80891eb0a3a8594a0c9,State:CONTAINER_RUNNING,CreatedAt:1744032402152482116,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-test-preload-271062,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3fcd048c4d7905e48815569eaec6f119,},
Annotations:map[string]string{io.kubernetes.container.hash: 5b4977d5,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5c62b83f3baca12c9fc45803d58de6c092e3d7ee5b70711de96148fade583ee2,PodSandboxId:5d4179f426c8f52d97011f6d8f1e5bfd92fae4ea715d7f2ff4395c3a8df4c425,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:6cab9d1bed1be49c215505c1a438ce0af66eb54b4e95f06e52037fcd36631f3d,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6cab9d1bed1be49c215505c1a438ce0af66eb54b4e95f06e52037fcd36631f3d,State:CONTAINER_RUNNING,CreatedAt:1744032402105293162,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-test-preload-271062,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c317c4234e4407c55e1ec406ad930620,},Annotations
:map[string]string{io.kubernetes.container.hash: 8bbfaed3,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=7ca45b01-13a6-48a5-a8d8-8ef9452b74fb name=/runtime.v1.RuntimeService/ListContainers
	
	
	==> container status <==
	CONTAINER           IMAGE                                                              CREATED             STATE               NAME                      ATTEMPT             POD ID              POD
	d5c4ffcaa722f       a4ca41631cc7ac19ce1be3ebf0314ac5f47af7c711f17066006db82ee3b75b03   7 seconds ago       Running             coredns                   1                   d17566705983a       coredns-6d4b75cb6d-wdb4j
	e88dba2f36e08       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562   14 seconds ago      Exited              storage-provisioner       2                   892205dcc0e01       storage-provisioner
	4f1f2b54edc7b       7a53d1e08ef58144850b48d05908b4ef5b611bff99a5a66dbcba7ab9f79433f7   14 seconds ago      Running             kube-proxy                1                   eb77af9384540       kube-proxy-qg7pc
	c3eb62c313b25       aebe758cef4cd05b9f8cee39758227714d02f42ef3088023c1e3cd454f927a2b   21 seconds ago      Running             etcd                      1                   18adbe500314f       etcd-test-preload-271062
	72604ea7454ff       1f99cb6da9a82e81081f65acdad10cdca2e5ec4084f91009bdcff31dd6151d48   21 seconds ago      Running             kube-controller-manager   1                   769453119ffe2       kube-controller-manager-test-preload-271062
	79f60ac8a7164       03fa22539fc1ccdb96fb15098e7a02fff03d0e366ce5d80891eb0a3a8594a0c9   21 seconds ago      Running             kube-scheduler            1                   df9a44d2fd73b       kube-scheduler-test-preload-271062
	5c62b83f3baca       6cab9d1bed1be49c215505c1a438ce0af66eb54b4e95f06e52037fcd36631f3d   21 seconds ago      Running             kube-apiserver            1                   5d4179f426c8f       kube-apiserver-test-preload-271062
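	
	The listing above is the CRI-level view of the restarted control-plane containers (note the exited storage-provisioner with attempt 2). A rough way to reproduce it from the host, assuming crictl is available in the guest as it normally is with the CRI-O runtime, is:
	
	  out/minikube-linux-amd64 -p test-preload-271062 ssh "sudo crictl ps -a"
	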
	
	
	==> coredns [d5c4ffcaa722f89a876567240c1567de5a9779b825b59456e1f9a39a1ab93a9e] <==
	.:53
	[INFO] plugin/reload: Running configuration MD5 = bbeeddb09682f41960fef01b05cb3a3d
	CoreDNS-1.8.6
	linux/amd64, go1.17.1, 13a9191
	[INFO] 127.0.0.1:55736 - 6084 "HINFO IN 4800333887732075792.1669924618888878033. udp 57 false 512" NXDOMAIN qr,rd,ra 57 0.022259479s
	
	
	==> describe nodes <==
	Name:               test-preload-271062
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=test-preload-271062
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=33e6edc58d2014d70e908473920ef4ac8eae1e43
	                    minikube.k8s.io/name=test-preload-271062
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2025_04_07T13_25_31_0700
	                    minikube.k8s.io/version=v1.35.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Mon, 07 Apr 2025 13:25:27 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  test-preload-271062
	  AcquireTime:     <unset>
	  RenewTime:       Mon, 07 Apr 2025 13:26:57 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Mon, 07 Apr 2025 13:26:56 +0000   Mon, 07 Apr 2025 13:25:24 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Mon, 07 Apr 2025 13:26:56 +0000   Mon, 07 Apr 2025 13:25:24 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Mon, 07 Apr 2025 13:26:56 +0000   Mon, 07 Apr 2025 13:25:24 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Mon, 07 Apr 2025 13:26:56 +0000   Mon, 07 Apr 2025 13:26:56 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.39.95
	  Hostname:    test-preload-271062
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	System Info:
	  Machine ID:                 c90ee8a05a4b4b6ba0ba01e3044e5f69
	  System UUID:                c90ee8a0-5a4b-4b6b-a0ba-01e3044e5f69
	  Boot ID:                    fdb729a5-e95b-4855-b4b2-4bbc5e64866c
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.24.4
	  Kube-Proxy Version:         v1.24.4
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (7 in total)
	  Namespace                   Name                                           CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                           ------------  ----------  ---------------  -------------  ---
	  kube-system                 coredns-6d4b75cb6d-wdb4j                       100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     80s
	  kube-system                 etcd-test-preload-271062                       100m (5%)     0 (0%)      100Mi (4%)       0 (0%)         94s
	  kube-system                 kube-apiserver-test-preload-271062             250m (12%)    0 (0%)      0 (0%)           0 (0%)         94s
	  kube-system                 kube-controller-manager-test-preload-271062    200m (10%)    0 (0%)      0 (0%)           0 (0%)         93s
	  kube-system                 kube-proxy-qg7pc                               0 (0%)        0 (0%)      0 (0%)           0 (0%)         80s
	  kube-system                 kube-scheduler-test-preload-271062             100m (5%)     0 (0%)      0 (0%)           0 (0%)         92s
	  kube-system                 storage-provisioner                            0 (0%)        0 (0%)      0 (0%)           0 (0%)         78s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                750m (37%)  0 (0%)
	  memory             170Mi (8%)  170Mi (8%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age                From             Message
	  ----    ------                   ----               ----             -------
	  Normal  Starting                 14s                kube-proxy       
	  Normal  Starting                 78s                kube-proxy       
	  Normal  Starting                 92s                kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  92s                kubelet          Node test-preload-271062 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    92s                kubelet          Node test-preload-271062 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     92s                kubelet          Node test-preload-271062 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  92s                kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeReady                82s                kubelet          Node test-preload-271062 status is now: NodeReady
	  Normal  RegisteredNode           80s                node-controller  Node test-preload-271062 event: Registered Node test-preload-271062 in Controller
	  Normal  Starting                 22s                kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  22s (x8 over 22s)  kubelet          Node test-preload-271062 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    22s (x8 over 22s)  kubelet          Node test-preload-271062 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     22s (x7 over 22s)  kubelet          Node test-preload-271062 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  22s                kubelet          Updated Node Allocatable limit across pods
	  Normal  RegisteredNode           4s                 node-controller  Node test-preload-271062 event: Registered Node test-preload-271062 in Controller
	
	
	==> dmesg <==
	[Apr 7 13:26] You have booted with nomodeset. This means your GPU drivers are DISABLED
	[  +0.000000] Any video related functionality will be severely degraded, and you may not even be able to suspend the system properly
	[  +0.000001] Unless you actually understand what nomodeset does, you should reboot without enabling it
	[  +0.050226] Spectre V2 : WARNING: Unprivileged eBPF is enabled with eIBRS on, data leaks possible via Spectre v2 BHB attacks!
	[  +0.038552] acpi PNP0A03:00: fail to add MMCONFIG information, can't access extended PCI configuration space under this bridge.
	[  +4.977799] platform regulatory.0: Direct firmware load for regulatory.db failed with error -2
	[  +2.180985] systemd-fstab-generator[116]: Ignoring "noauto" option for root device
	[  +2.373296] NFSD: Using /var/lib/nfs/v4recovery as the NFSv4 state recovery directory
	[  +0.000004] NFSD: unable to find recovery directory /var/lib/nfs/v4recovery
	[  +0.000001] NFSD: Unable to initialize client recovery tracking! (-2)
	[  +6.699964] systemd-fstab-generator[587]: Ignoring "noauto" option for root device
	[  +0.057598] kauditd_printk_skb: 1 callbacks suppressed
	[  +0.055771] systemd-fstab-generator[599]: Ignoring "noauto" option for root device
	[  +0.175116] systemd-fstab-generator[613]: Ignoring "noauto" option for root device
	[  +0.150138] systemd-fstab-generator[625]: Ignoring "noauto" option for root device
	[  +0.309027] systemd-fstab-generator[655]: Ignoring "noauto" option for root device
	[ +12.779198] systemd-fstab-generator[984]: Ignoring "noauto" option for root device
	[  +0.067746] kauditd_printk_skb: 130 callbacks suppressed
	[  +1.862764] systemd-fstab-generator[1114]: Ignoring "noauto" option for root device
	[  +6.924041] kauditd_printk_skb: 105 callbacks suppressed
	[  +1.763973] systemd-fstab-generator[1847]: Ignoring "noauto" option for root device
	[  +5.425047] kauditd_printk_skb: 59 callbacks suppressed
	
	
	==> etcd [c3eb62c313b25caeb5b04e483e5dec6ac8b064065d547c220556abeb6bb5ba59] <==
	{"level":"info","ts":"2025-04-07T13:26:42.606Z","caller":"etcdserver/server.go:851","msg":"starting etcd server","local-member-id":"a71e7bac075997","local-server-version":"3.5.3","cluster-version":"to_be_decided"}
	{"level":"info","ts":"2025-04-07T13:26:42.606Z","caller":"etcdserver/server.go:752","msg":"starting initial election tick advance","election-ticks":10}
	{"level":"info","ts":"2025-04-07T13:26:42.607Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"a71e7bac075997 switched to configuration voters=(47039837626653079)"}
	{"level":"info","ts":"2025-04-07T13:26:42.611Z","caller":"membership/cluster.go:421","msg":"added member","cluster-id":"986e33f48d4d13ba","local-member-id":"a71e7bac075997","added-peer-id":"a71e7bac075997","added-peer-peer-urls":["https://192.168.39.95:2380"]}
	{"level":"info","ts":"2025-04-07T13:26:42.613Z","caller":"membership/cluster.go:584","msg":"set initial cluster version","cluster-id":"986e33f48d4d13ba","local-member-id":"a71e7bac075997","cluster-version":"3.5"}
	{"level":"info","ts":"2025-04-07T13:26:42.615Z","caller":"api/capability.go:75","msg":"enabled capabilities for version","cluster-version":"3.5"}
	{"level":"info","ts":"2025-04-07T13:26:42.631Z","caller":"embed/etcd.go:688","msg":"starting with client TLS","tls-info":"cert = /var/lib/minikube/certs/etcd/server.crt, key = /var/lib/minikube/certs/etcd/server.key, client-cert=, client-key=, trusted-ca = /var/lib/minikube/certs/etcd/ca.crt, client-cert-auth = true, crl-file = ","cipher-suites":[]}
	{"level":"info","ts":"2025-04-07T13:26:42.634Z","caller":"embed/etcd.go:277","msg":"now serving peer/client/metrics","local-member-id":"a71e7bac075997","initial-advertise-peer-urls":["https://192.168.39.95:2380"],"listen-peer-urls":["https://192.168.39.95:2380"],"advertise-client-urls":["https://192.168.39.95:2379"],"listen-client-urls":["https://127.0.0.1:2379","https://192.168.39.95:2379"],"listen-metrics-urls":["http://127.0.0.1:2381"]}
	{"level":"info","ts":"2025-04-07T13:26:42.634Z","caller":"embed/etcd.go:763","msg":"serving metrics","address":"http://127.0.0.1:2381"}
	{"level":"info","ts":"2025-04-07T13:26:42.633Z","caller":"embed/etcd.go:581","msg":"serving peer traffic","address":"192.168.39.95:2380"}
	{"level":"info","ts":"2025-04-07T13:26:42.634Z","caller":"embed/etcd.go:553","msg":"cmux::serve","address":"192.168.39.95:2380"}
	{"level":"info","ts":"2025-04-07T13:26:44.053Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"a71e7bac075997 is starting a new election at term 2"}
	{"level":"info","ts":"2025-04-07T13:26:44.053Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"a71e7bac075997 became pre-candidate at term 2"}
	{"level":"info","ts":"2025-04-07T13:26:44.053Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"a71e7bac075997 received MsgPreVoteResp from a71e7bac075997 at term 2"}
	{"level":"info","ts":"2025-04-07T13:26:44.053Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"a71e7bac075997 became candidate at term 3"}
	{"level":"info","ts":"2025-04-07T13:26:44.053Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"a71e7bac075997 received MsgVoteResp from a71e7bac075997 at term 3"}
	{"level":"info","ts":"2025-04-07T13:26:44.053Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"a71e7bac075997 became leader at term 3"}
	{"level":"info","ts":"2025-04-07T13:26:44.053Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: a71e7bac075997 elected leader a71e7bac075997 at term 3"}
	{"level":"info","ts":"2025-04-07T13:26:44.060Z","caller":"etcdserver/server.go:2042","msg":"published local member to cluster through raft","local-member-id":"a71e7bac075997","local-member-attributes":"{Name:test-preload-271062 ClientURLs:[https://192.168.39.95:2379]}","request-path":"/0/members/a71e7bac075997/attributes","cluster-id":"986e33f48d4d13ba","publish-timeout":"7s"}
	{"level":"info","ts":"2025-04-07T13:26:44.060Z","caller":"embed/serve.go:98","msg":"ready to serve client requests"}
	{"level":"info","ts":"2025-04-07T13:26:44.062Z","caller":"embed/serve.go:188","msg":"serving client traffic securely","address":"192.168.39.95:2379"}
	{"level":"info","ts":"2025-04-07T13:26:44.062Z","caller":"embed/serve.go:98","msg":"ready to serve client requests"}
	{"level":"info","ts":"2025-04-07T13:26:44.064Z","caller":"embed/serve.go:188","msg":"serving client traffic securely","address":"127.0.0.1:2379"}
	{"level":"info","ts":"2025-04-07T13:26:44.064Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2025-04-07T13:26:44.064Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	
	
	==> kernel <==
	 13:27:03 up 0 min,  0 users,  load average: 1.04, 0.28, 0.09
	Linux test-preload-271062 5.10.207 #1 SMP Tue Jan 14 08:15:54 UTC 2025 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2023.02.9"
	
	
	==> kube-apiserver [5c62b83f3baca12c9fc45803d58de6c092e3d7ee5b70711de96148fade583ee2] <==
	I0407 13:26:46.629110       1 apiapproval_controller.go:186] Starting KubernetesAPIApprovalPolicyConformantConditionController
	I0407 13:26:46.629121       1 crd_finalizer.go:266] Starting CRDFinalizer
	I0407 13:26:46.642321       1 crdregistration_controller.go:111] Starting crd-autoregister controller
	I0407 13:26:46.642359       1 shared_informer.go:255] Waiting for caches to sync for crd-autoregister
	I0407 13:26:46.665904       1 dynamic_cafile_content.go:157] "Starting controller" name="client-ca-bundle::/var/lib/minikube/certs/ca.crt"
	I0407 13:26:46.685482       1 dynamic_cafile_content.go:157] "Starting controller" name="request-header::/var/lib/minikube/certs/front-proxy-ca.crt"
	E0407 13:26:46.762542       1 controller.go:169] Error removing old endpoints from kubernetes service: no master IPs were listed in storage, refusing to erase all endpoints for the kubernetes service
	I0407 13:26:46.822857       1 shared_informer.go:262] Caches are synced for cluster_authentication_trust_controller
	I0407 13:26:46.825532       1 cache.go:39] Caches are synced for AvailableConditionController controller
	I0407 13:26:46.826491       1 cache.go:39] Caches are synced for autoregister controller
	I0407 13:26:46.826720       1 cache.go:39] Caches are synced for APIServiceRegistrationController controller
	I0407 13:26:46.838293       1 controller.go:611] quota admission added evaluator for: leases.coordination.k8s.io
	I0407 13:26:46.840426       1 apf_controller.go:322] Running API Priority and Fairness config worker
	I0407 13:26:46.842154       1 shared_informer.go:262] Caches are synced for node_authorizer
	I0407 13:26:46.842429       1 shared_informer.go:262] Caches are synced for crd-autoregister
	I0407 13:26:47.280825       1 controller.go:132] OpenAPI AggregationController: action for item k8s_internal_local_delegation_chain_0000000000: Nothing (removed from the queue).
	I0407 13:26:47.638600       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I0407 13:26:48.709689       1 controller.go:611] quota admission added evaluator for: serviceaccounts
	I0407 13:26:48.736020       1 controller.go:611] quota admission added evaluator for: deployments.apps
	I0407 13:26:48.822507       1 controller.go:611] quota admission added evaluator for: daemonsets.apps
	I0407 13:26:48.858538       1 controller.go:611] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I0407 13:26:48.872942       1 controller.go:611] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I0407 13:26:49.023314       1 controller.go:611] quota admission added evaluator for: events.events.k8s.io
	I0407 13:26:59.368279       1 controller.go:611] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I0407 13:26:59.517078       1 controller.go:611] quota admission added evaluator for: endpoints
	
	
	==> kube-controller-manager [72604ea7454ff2d8a772b0317ce6eef1d91b988874b085670874d756530ed295] <==
	I0407 13:26:59.364146       1 shared_informer.go:262] Caches are synced for endpoint
	I0407 13:26:59.365253       1 shared_informer.go:262] Caches are synced for job
	I0407 13:26:59.374548       1 shared_informer.go:262] Caches are synced for HPA
	I0407 13:26:59.385184       1 shared_informer.go:262] Caches are synced for node
	I0407 13:26:59.385247       1 range_allocator.go:173] Starting range CIDR allocator
	I0407 13:26:59.385253       1 shared_informer.go:255] Waiting for caches to sync for cidrallocator
	I0407 13:26:59.385269       1 shared_informer.go:262] Caches are synced for cidrallocator
	I0407 13:26:59.386530       1 shared_informer.go:262] Caches are synced for cronjob
	I0407 13:26:59.388973       1 shared_informer.go:262] Caches are synced for ClusterRoleAggregator
	I0407 13:26:59.390170       1 shared_informer.go:262] Caches are synced for ephemeral
	I0407 13:26:59.391404       1 shared_informer.go:262] Caches are synced for deployment
	I0407 13:26:59.395786       1 shared_informer.go:262] Caches are synced for persistent volume
	I0407 13:26:59.398256       1 shared_informer.go:262] Caches are synced for PV protection
	I0407 13:26:59.398376       1 shared_informer.go:262] Caches are synced for endpoint_slice_mirroring
	I0407 13:26:59.403731       1 shared_informer.go:262] Caches are synced for ReplicationController
	I0407 13:26:59.407172       1 shared_informer.go:262] Caches are synced for ReplicaSet
	I0407 13:26:59.412785       1 shared_informer.go:262] Caches are synced for crt configmap
	I0407 13:26:59.466302       1 shared_informer.go:262] Caches are synced for attach detach
	I0407 13:26:59.483536       1 shared_informer.go:262] Caches are synced for disruption
	I0407 13:26:59.483572       1 disruption.go:371] Sending events to api server.
	I0407 13:26:59.567923       1 shared_informer.go:262] Caches are synced for resource quota
	I0407 13:26:59.585191       1 shared_informer.go:262] Caches are synced for resource quota
	I0407 13:27:00.033478       1 shared_informer.go:262] Caches are synced for garbage collector
	I0407 13:27:00.039257       1 shared_informer.go:262] Caches are synced for garbage collector
	I0407 13:27:00.039369       1 garbagecollector.go:158] Garbage collector: all resource monitors have synced. Proceeding to collect garbage
	
	
	==> kube-proxy [4f1f2b54edc7b5b1722dd59e51d02f8bd45d852a9948021c479ef1f7c4397d37] <==
	I0407 13:26:48.949781       1 node.go:163] Successfully retrieved node IP: 192.168.39.95
	I0407 13:26:48.949870       1 server_others.go:138] "Detected node IP" address="192.168.39.95"
	I0407 13:26:48.949905       1 server_others.go:578] "Unknown proxy mode, assuming iptables proxy" proxyMode=""
	I0407 13:26:49.009314       1 server_others.go:199] "kube-proxy running in single-stack mode, this ipFamily is not supported" ipFamily=IPv6
	I0407 13:26:49.009337       1 server_others.go:206] "Using iptables Proxier"
	I0407 13:26:49.010978       1 proxier.go:259] "Setting route_localnet=1, use nodePortAddresses to filter loopback addresses for NodePorts to skip it https://issues.k8s.io/90259"
	I0407 13:26:49.011373       1 server.go:661] "Version info" version="v1.24.4"
	I0407 13:26:49.012670       1 server.go:663] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0407 13:26:49.016244       1 config.go:317] "Starting service config controller"
	I0407 13:26:49.016680       1 shared_informer.go:255] Waiting for caches to sync for service config
	I0407 13:26:49.016760       1 config.go:226] "Starting endpoint slice config controller"
	I0407 13:26:49.016780       1 shared_informer.go:255] Waiting for caches to sync for endpoint slice config
	I0407 13:26:49.017865       1 config.go:444] "Starting node config controller"
	I0407 13:26:49.019220       1 shared_informer.go:255] Waiting for caches to sync for node config
	I0407 13:26:49.116833       1 shared_informer.go:262] Caches are synced for service config
	I0407 13:26:49.117062       1 shared_informer.go:262] Caches are synced for endpoint slice config
	I0407 13:26:49.120300       1 shared_informer.go:262] Caches are synced for node config
	
	
	==> kube-scheduler [79f60ac8a71648106924fd9bda3c85b33b6ee20a81b9c151bf92b18ec3c1e0e6] <==
	I0407 13:26:42.755875       1 serving.go:348] Generated self-signed cert in-memory
	W0407 13:26:46.719710       1 requestheader_controller.go:193] Unable to get configmap/extension-apiserver-authentication in kube-system.  Usually fixed by 'kubectl create rolebinding -n kube-system ROLEBINDING_NAME --role=extension-apiserver-authentication-reader --serviceaccount=YOUR_NS:YOUR_SA'
	W0407 13:26:46.719829       1 authentication.go:346] Error looking up in-cluster authentication configuration: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot get resource "configmaps" in API group "" in the namespace "kube-system"
	W0407 13:26:46.719897       1 authentication.go:347] Continuing without authentication configuration. This may treat all requests as anonymous.
	W0407 13:26:46.719908       1 authentication.go:348] To require authentication configuration lookup to succeed, set --authentication-tolerate-lookup-failure=false
	I0407 13:26:46.771168       1 server.go:147] "Starting Kubernetes Scheduler" version="v1.24.4"
	I0407 13:26:46.771216       1 server.go:149] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0407 13:26:46.775340       1 secure_serving.go:210] Serving securely on 127.0.0.1:10259
	I0407 13:26:46.775838       1 configmap_cafile_content.go:202] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I0407 13:26:46.778094       1 shared_informer.go:255] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	I0407 13:26:46.775890       1 tlsconfig.go:240] "Starting DynamicServingCertificateController"
	I0407 13:26:46.878835       1 shared_informer.go:262] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	
	==> kubelet <==
	Apr 07 13:26:47 test-preload-271062 kubelet[1121]: I0407 13:26:47.461159    1121 reconciler.go:342] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/f98c7d5e-7a3d-4fa6-af06-0ff774d44c61-xtables-lock\") pod \"kube-proxy-qg7pc\" (UID: \"f98c7d5e-7a3d-4fa6-af06-0ff774d44c61\") " pod="kube-system/kube-proxy-qg7pc"
	Apr 07 13:26:47 test-preload-271062 kubelet[1121]: I0407 13:26:47.461588    1121 reconciler.go:342] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2dbm7\" (UniqueName: \"kubernetes.io/projected/f7ac2922-9495-45c8-a013-84c2440685e0-kube-api-access-2dbm7\") pod \"storage-provisioner\" (UID: \"f7ac2922-9495-45c8-a013-84c2440685e0\") " pod="kube-system/storage-provisioner"
	Apr 07 13:26:47 test-preload-271062 kubelet[1121]: I0407 13:26:47.461719    1121 reconciler.go:342] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pkc6r\" (UniqueName: \"kubernetes.io/projected/4ae9f760-b70d-4ff4-9970-c14943d244dd-kube-api-access-pkc6r\") pod \"coredns-6d4b75cb6d-wdb4j\" (UID: \"4ae9f760-b70d-4ff4-9970-c14943d244dd\") " pod="kube-system/coredns-6d4b75cb6d-wdb4j"
	Apr 07 13:26:47 test-preload-271062 kubelet[1121]: I0407 13:26:47.461786    1121 reconciler.go:342] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/f98c7d5e-7a3d-4fa6-af06-0ff774d44c61-kube-proxy\") pod \"kube-proxy-qg7pc\" (UID: \"f98c7d5e-7a3d-4fa6-af06-0ff774d44c61\") " pod="kube-system/kube-proxy-qg7pc"
	Apr 07 13:26:47 test-preload-271062 kubelet[1121]: I0407 13:26:47.461839    1121 reconciler.go:342] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-t7mcf\" (UniqueName: \"kubernetes.io/projected/f98c7d5e-7a3d-4fa6-af06-0ff774d44c61-kube-api-access-t7mcf\") pod \"kube-proxy-qg7pc\" (UID: \"f98c7d5e-7a3d-4fa6-af06-0ff774d44c61\") " pod="kube-system/kube-proxy-qg7pc"
	Apr 07 13:26:47 test-preload-271062 kubelet[1121]: I0407 13:26:47.461890    1121 reconciler.go:342] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/4ae9f760-b70d-4ff4-9970-c14943d244dd-config-volume\") pod \"coredns-6d4b75cb6d-wdb4j\" (UID: \"4ae9f760-b70d-4ff4-9970-c14943d244dd\") " pod="kube-system/coredns-6d4b75cb6d-wdb4j"
	Apr 07 13:26:47 test-preload-271062 kubelet[1121]: I0407 13:26:47.462123    1121 reconciler.go:342] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/f98c7d5e-7a3d-4fa6-af06-0ff774d44c61-lib-modules\") pod \"kube-proxy-qg7pc\" (UID: \"f98c7d5e-7a3d-4fa6-af06-0ff774d44c61\") " pod="kube-system/kube-proxy-qg7pc"
	Apr 07 13:26:47 test-preload-271062 kubelet[1121]: I0407 13:26:47.462219    1121 reconciler.go:342] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/host-path/f7ac2922-9495-45c8-a013-84c2440685e0-tmp\") pod \"storage-provisioner\" (UID: \"f7ac2922-9495-45c8-a013-84c2440685e0\") " pod="kube-system/storage-provisioner"
	Apr 07 13:26:47 test-preload-271062 kubelet[1121]: I0407 13:26:47.462274    1121 reconciler.go:159] "Reconciler: start to sync state"
	Apr 07 13:26:47 test-preload-271062 kubelet[1121]: E0407 13:26:47.566926    1121 configmap.go:193] Couldn't get configMap kube-system/coredns: object "kube-system"/"coredns" not registered
	Apr 07 13:26:47 test-preload-271062 kubelet[1121]: E0407 13:26:47.567052    1121 nestedpendingoperations.go:335] Operation for "{volumeName:kubernetes.io/configmap/4ae9f760-b70d-4ff4-9970-c14943d244dd-config-volume podName:4ae9f760-b70d-4ff4-9970-c14943d244dd nodeName:}" failed. No retries permitted until 2025-04-07 13:26:48.067024752 +0000 UTC m=+6.858108011 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "config-volume" (UniqueName: "kubernetes.io/configmap/4ae9f760-b70d-4ff4-9970-c14943d244dd-config-volume") pod "coredns-6d4b75cb6d-wdb4j" (UID: "4ae9f760-b70d-4ff4-9970-c14943d244dd") : object "kube-system"/"coredns" not registered
	Apr 07 13:26:48 test-preload-271062 kubelet[1121]: E0407 13:26:48.069176    1121 configmap.go:193] Couldn't get configMap kube-system/coredns: object "kube-system"/"coredns" not registered
	Apr 07 13:26:48 test-preload-271062 kubelet[1121]: E0407 13:26:48.069265    1121 nestedpendingoperations.go:335] Operation for "{volumeName:kubernetes.io/configmap/4ae9f760-b70d-4ff4-9970-c14943d244dd-config-volume podName:4ae9f760-b70d-4ff4-9970-c14943d244dd nodeName:}" failed. No retries permitted until 2025-04-07 13:26:49.069248948 +0000 UTC m=+7.860332192 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "config-volume" (UniqueName: "kubernetes.io/configmap/4ae9f760-b70d-4ff4-9970-c14943d244dd-config-volume") pod "coredns-6d4b75cb6d-wdb4j" (UID: "4ae9f760-b70d-4ff4-9970-c14943d244dd") : object "kube-system"/"coredns" not registered
	Apr 07 13:26:48 test-preload-271062 kubelet[1121]: E0407 13:26:48.502403    1121 pod_workers.go:951] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/cni/net.d/. Has your network provider started?" pod="kube-system/coredns-6d4b75cb6d-wdb4j" podUID=4ae9f760-b70d-4ff4-9970-c14943d244dd
	Apr 07 13:26:48 test-preload-271062 kubelet[1121]: I0407 13:26:48.552874    1121 scope.go:110] "RemoveContainer" containerID="6fca57d6b0b220589aaee13d9dd1c646be8242906a1dfdfb4ec0296a2de2ae35"
	Apr 07 13:26:49 test-preload-271062 kubelet[1121]: E0407 13:26:49.076749    1121 configmap.go:193] Couldn't get configMap kube-system/coredns: object "kube-system"/"coredns" not registered
	Apr 07 13:26:49 test-preload-271062 kubelet[1121]: E0407 13:26:49.076872    1121 nestedpendingoperations.go:335] Operation for "{volumeName:kubernetes.io/configmap/4ae9f760-b70d-4ff4-9970-c14943d244dd-config-volume podName:4ae9f760-b70d-4ff4-9970-c14943d244dd nodeName:}" failed. No retries permitted until 2025-04-07 13:26:51.076855105 +0000 UTC m=+9.867938362 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "config-volume" (UniqueName: "kubernetes.io/configmap/4ae9f760-b70d-4ff4-9970-c14943d244dd-config-volume") pod "coredns-6d4b75cb6d-wdb4j" (UID: "4ae9f760-b70d-4ff4-9970-c14943d244dd") : object "kube-system"/"coredns" not registered
	Apr 07 13:26:49 test-preload-271062 kubelet[1121]: I0407 13:26:49.558786    1121 scope.go:110] "RemoveContainer" containerID="e88dba2f36e087e90c3ffba259550940c635ac3421a91f614003937cce602220"
	Apr 07 13:26:49 test-preload-271062 kubelet[1121]: E0407 13:26:49.559018    1121 pod_workers.go:951] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"storage-provisioner\" with CrashLoopBackOff: \"back-off 10s restarting failed container=storage-provisioner pod=storage-provisioner_kube-system(f7ac2922-9495-45c8-a013-84c2440685e0)\"" pod="kube-system/storage-provisioner" podUID=f7ac2922-9495-45c8-a013-84c2440685e0
	Apr 07 13:26:49 test-preload-271062 kubelet[1121]: I0407 13:26:49.559129    1121 scope.go:110] "RemoveContainer" containerID="6fca57d6b0b220589aaee13d9dd1c646be8242906a1dfdfb4ec0296a2de2ae35"
	Apr 07 13:26:50 test-preload-271062 kubelet[1121]: E0407 13:26:50.502184    1121 pod_workers.go:951] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/cni/net.d/. Has your network provider started?" pod="kube-system/coredns-6d4b75cb6d-wdb4j" podUID=4ae9f760-b70d-4ff4-9970-c14943d244dd
	Apr 07 13:26:50 test-preload-271062 kubelet[1121]: I0407 13:26:50.564752    1121 scope.go:110] "RemoveContainer" containerID="e88dba2f36e087e90c3ffba259550940c635ac3421a91f614003937cce602220"
	Apr 07 13:26:50 test-preload-271062 kubelet[1121]: E0407 13:26:50.565715    1121 pod_workers.go:951] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"storage-provisioner\" with CrashLoopBackOff: \"back-off 10s restarting failed container=storage-provisioner pod=storage-provisioner_kube-system(f7ac2922-9495-45c8-a013-84c2440685e0)\"" pod="kube-system/storage-provisioner" podUID=f7ac2922-9495-45c8-a013-84c2440685e0
	Apr 07 13:26:51 test-preload-271062 kubelet[1121]: E0407 13:26:51.095579    1121 configmap.go:193] Couldn't get configMap kube-system/coredns: object "kube-system"/"coredns" not registered
	Apr 07 13:26:51 test-preload-271062 kubelet[1121]: E0407 13:26:51.095748    1121 nestedpendingoperations.go:335] Operation for "{volumeName:kubernetes.io/configmap/4ae9f760-b70d-4ff4-9970-c14943d244dd-config-volume podName:4ae9f760-b70d-4ff4-9970-c14943d244dd nodeName:}" failed. No retries permitted until 2025-04-07 13:26:55.095723933 +0000 UTC m=+13.886807187 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "config-volume" (UniqueName: "kubernetes.io/configmap/4ae9f760-b70d-4ff4-9970-c14943d244dd-config-volume") pod "coredns-6d4b75cb6d-wdb4j" (UID: "4ae9f760-b70d-4ff4-9970-c14943d244dd") : object "kube-system"/"coredns" not registered
	
	
	==> storage-provisioner [e88dba2f36e087e90c3ffba259550940c635ac3421a91f614003937cce602220] <==
	I0407 13:26:48.770681       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	F0407 13:26:48.773601       1 main.go:39] error getting server version: Get "https://10.96.0.1:443/version?timeout=32s": dial tcp 10.96.0.1:443: connect: connection refused
	

                                                
                                                
-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p test-preload-271062 -n test-preload-271062
helpers_test.go:261: (dbg) Run:  kubectl --context test-preload-271062 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:285: <<< TestPreload FAILED: end of post-mortem logs <<<
helpers_test.go:286: ---------------------/post-mortem---------------------------------
helpers_test.go:175: Cleaning up "test-preload-271062" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p test-preload-271062
E0407 13:27:04.914985 1169716 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20602-1162386/.minikube/profiles/addons-660533/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p test-preload-271062: (1.326218148s)
--- FAIL: TestPreload (174.80s)

                                                
                                    
x
+
TestKubernetesUpgrade (368.95s)

                                                
                                                
=== RUN   TestKubernetesUpgrade
=== PAUSE TestKubernetesUpgrade

                                                
                                                

                                                
                                                
=== CONT  TestKubernetesUpgrade
version_upgrade_test.go:222: (dbg) Run:  out/minikube-linux-amd64 start -p kubernetes-upgrade-973925 --memory=2200 --kubernetes-version=v1.20.0 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio
E0407 13:41:09.343609 1169716 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20602-1162386/.minikube/profiles/functional-728898/client.crt: no such file or directory" logger="UnhandledError"
E0407 13:42:04.915774 1169716 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20602-1162386/.minikube/profiles/addons-660533/client.crt: no such file or directory" logger="UnhandledError"
version_upgrade_test.go:222: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p kubernetes-upgrade-973925 --memory=2200 --kubernetes-version=v1.20.0 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio: exit status 109 (4m38.085452884s)

                                                
                                                
-- stdout --
	* [kubernetes-upgrade-973925] minikube v1.35.0 on Ubuntu 20.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=20602
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/20602-1162386/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/20602-1162386/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the kvm2 driver based on user configuration
	* Starting "kubernetes-upgrade-973925" primary control-plane node in "kubernetes-upgrade-973925" cluster
	* Creating kvm2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	* Preparing Kubernetes v1.20.0 on CRI-O 1.29.1 ...
	  - Generating certificates and keys ...
	  - Booting up control plane ...
	  - Generating certificates and keys ...
	  - Booting up control plane ...
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0407 13:39:14.956504 1216250 out.go:345] Setting OutFile to fd 1 ...
	I0407 13:39:14.956840 1216250 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0407 13:39:14.956858 1216250 out.go:358] Setting ErrFile to fd 2...
	I0407 13:39:14.956873 1216250 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0407 13:39:14.957483 1216250 root.go:338] Updating PATH: /home/jenkins/minikube-integration/20602-1162386/.minikube/bin
	I0407 13:39:14.958310 1216250 out.go:352] Setting JSON to false
	I0407 13:39:14.959686 1216250 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-2","uptime":19299,"bootTime":1744013856,"procs":230,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1078-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0407 13:39:14.959837 1216250 start.go:139] virtualization: kvm guest
	I0407 13:39:14.963000 1216250 out.go:177] * [kubernetes-upgrade-973925] minikube v1.35.0 on Ubuntu 20.04 (kvm/amd64)
	I0407 13:39:14.966292 1216250 notify.go:220] Checking for updates...
	I0407 13:39:14.966359 1216250 out.go:177]   - MINIKUBE_LOCATION=20602
	I0407 13:39:14.968762 1216250 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0407 13:39:14.970920 1216250 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/20602-1162386/kubeconfig
	I0407 13:39:14.972908 1216250 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/20602-1162386/.minikube
	I0407 13:39:14.974901 1216250 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0407 13:39:14.977248 1216250 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0407 13:39:14.981185 1216250 config.go:182] Loaded profile config "embed-certs-931633": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.32.2
	I0407 13:39:14.981332 1216250 config.go:182] Loaded profile config "no-preload-028452": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.32.2
	I0407 13:39:14.981407 1216250 config.go:182] Loaded profile config "old-k8s-version-435730": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.20.0
	I0407 13:39:14.981521 1216250 driver.go:394] Setting default libvirt URI to qemu:///system
	I0407 13:39:15.035368 1216250 out.go:177] * Using the kvm2 driver based on user configuration
	I0407 13:39:15.037644 1216250 start.go:297] selected driver: kvm2
	I0407 13:39:15.037676 1216250 start.go:901] validating driver "kvm2" against <nil>
	I0407 13:39:15.037693 1216250 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0407 13:39:15.038724 1216250 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0407 13:39:15.038877 1216250 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/20602-1162386/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0407 13:39:15.061325 1216250 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.35.0
	I0407 13:39:15.061418 1216250 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0407 13:39:15.061824 1216250 start_flags.go:929] Wait components to verify : map[apiserver:true system_pods:true]
	I0407 13:39:15.061898 1216250 cni.go:84] Creating CNI manager for ""
	I0407 13:39:15.062073 1216250 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0407 13:39:15.062116 1216250 start_flags.go:319] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I0407 13:39:15.062217 1216250 start.go:340] cluster config:
	{Name:kubernetes-upgrade-973925 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.46-1743675393-20591@sha256:8de8167e280414c9d16e4c3da59bc85bc7c9cc24228af995863fc7bcabfcf727 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:kubernetes-upgrade-973925 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0407 13:39:15.062409 1216250 iso.go:125] acquiring lock: {Name:mk51e1827709f7a3810dbd898083f8185ece65eb Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0407 13:39:15.065237 1216250 out.go:177] * Starting "kubernetes-upgrade-973925" primary control-plane node in "kubernetes-upgrade-973925" cluster
	I0407 13:39:15.067136 1216250 preload.go:131] Checking if preload exists for k8s version v1.20.0 and runtime crio
	I0407 13:39:15.067225 1216250 preload.go:146] Found local preload: /home/jenkins/minikube-integration/20602-1162386/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4
	I0407 13:39:15.067237 1216250 cache.go:56] Caching tarball of preloaded images
	I0407 13:39:15.067389 1216250 preload.go:172] Found /home/jenkins/minikube-integration/20602-1162386/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I0407 13:39:15.067409 1216250 cache.go:59] Finished verifying existence of preloaded tar for v1.20.0 on crio
	I0407 13:39:15.067572 1216250 profile.go:143] Saving config to /home/jenkins/minikube-integration/20602-1162386/.minikube/profiles/kubernetes-upgrade-973925/config.json ...
	I0407 13:39:15.067619 1216250 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20602-1162386/.minikube/profiles/kubernetes-upgrade-973925/config.json: {Name:mka056fa09ad7020cd4783643908782972a52dae Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0407 13:39:15.067867 1216250 start.go:360] acquireMachinesLock for kubernetes-upgrade-973925: {Name:mk51d4c744fa92d56cf6ac11b1e792c85ef6709a Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0407 13:39:15.067936 1216250 start.go:364] duration metric: took 40.857µs to acquireMachinesLock for "kubernetes-upgrade-973925"
	I0407 13:39:15.068005 1216250 start.go:93] Provisioning new machine with config: &{Name:kubernetes-upgrade-973925 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.35.0-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.46-1743675393-20591@sha256:8de8167e280414c9d16e4c3da59bc85bc7c9cc24228af995863fc7bcabfcf727 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:kubernetes-upgrade-973925 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0407 13:39:15.068098 1216250 start.go:125] createHost starting for "" (driver="kvm2")
	I0407 13:39:15.070413 1216250 out.go:235] * Creating kvm2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I0407 13:39:15.070624 1216250 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0407 13:39:15.070716 1216250 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0407 13:39:15.090328 1216250 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34107
	I0407 13:39:15.090991 1216250 main.go:141] libmachine: () Calling .GetVersion
	I0407 13:39:15.091735 1216250 main.go:141] libmachine: Using API Version  1
	I0407 13:39:15.091760 1216250 main.go:141] libmachine: () Calling .SetConfigRaw
	I0407 13:39:15.092255 1216250 main.go:141] libmachine: () Calling .GetMachineName
	I0407 13:39:15.092740 1216250 main.go:141] libmachine: (kubernetes-upgrade-973925) Calling .GetMachineName
	I0407 13:39:15.093118 1216250 main.go:141] libmachine: (kubernetes-upgrade-973925) Calling .DriverName
	I0407 13:39:15.093377 1216250 start.go:159] libmachine.API.Create for "kubernetes-upgrade-973925" (driver="kvm2")
	I0407 13:39:15.093419 1216250 client.go:168] LocalClient.Create starting
	I0407 13:39:15.093467 1216250 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/20602-1162386/.minikube/certs/ca.pem
	I0407 13:39:15.093514 1216250 main.go:141] libmachine: Decoding PEM data...
	I0407 13:39:15.093537 1216250 main.go:141] libmachine: Parsing certificate...
	I0407 13:39:15.093883 1216250 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/20602-1162386/.minikube/certs/cert.pem
	I0407 13:39:15.094024 1216250 main.go:141] libmachine: Decoding PEM data...
	I0407 13:39:15.094058 1216250 main.go:141] libmachine: Parsing certificate...
	I0407 13:39:15.094096 1216250 main.go:141] libmachine: Running pre-create checks...
	I0407 13:39:15.094112 1216250 main.go:141] libmachine: (kubernetes-upgrade-973925) Calling .PreCreateCheck
	I0407 13:39:15.094771 1216250 main.go:141] libmachine: (kubernetes-upgrade-973925) Calling .GetConfigRaw
	I0407 13:39:15.095413 1216250 main.go:141] libmachine: Creating machine...
	I0407 13:39:15.095434 1216250 main.go:141] libmachine: (kubernetes-upgrade-973925) Calling .Create
	I0407 13:39:15.095802 1216250 main.go:141] libmachine: (kubernetes-upgrade-973925) creating KVM machine...
	I0407 13:39:15.095838 1216250 main.go:141] libmachine: (kubernetes-upgrade-973925) creating network...
	I0407 13:39:15.098643 1216250 main.go:141] libmachine: (kubernetes-upgrade-973925) DBG | found existing default KVM network
	I0407 13:39:15.100042 1216250 main.go:141] libmachine: (kubernetes-upgrade-973925) DBG | I0407 13:39:15.099770 1216274 network.go:211] skipping subnet 192.168.39.0/24 that is taken: &{IP:192.168.39.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.39.0/24 Gateway:192.168.39.1 ClientMin:192.168.39.2 ClientMax:192.168.39.254 Broadcast:192.168.39.255 IsPrivate:true Interface:{IfaceName:virbr1 IfaceIPv4:192.168.39.1 IfaceMTU:1500 IfaceMAC:52:54:00:87:f6:77} reservation:<nil>}
	I0407 13:39:15.101510 1216250 main.go:141] libmachine: (kubernetes-upgrade-973925) DBG | I0407 13:39:15.101396 1216274 network.go:206] using free private subnet 192.168.50.0/24: &{IP:192.168.50.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.50.0/24 Gateway:192.168.50.1 ClientMin:192.168.50.2 ClientMax:192.168.50.254 Broadcast:192.168.50.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc0002050a0}
	I0407 13:39:15.101592 1216250 main.go:141] libmachine: (kubernetes-upgrade-973925) DBG | created network xml: 
	I0407 13:39:15.101628 1216250 main.go:141] libmachine: (kubernetes-upgrade-973925) DBG | <network>
	I0407 13:39:15.101646 1216250 main.go:141] libmachine: (kubernetes-upgrade-973925) DBG |   <name>mk-kubernetes-upgrade-973925</name>
	I0407 13:39:15.101657 1216250 main.go:141] libmachine: (kubernetes-upgrade-973925) DBG |   <dns enable='no'/>
	I0407 13:39:15.101666 1216250 main.go:141] libmachine: (kubernetes-upgrade-973925) DBG |   
	I0407 13:39:15.101675 1216250 main.go:141] libmachine: (kubernetes-upgrade-973925) DBG |   <ip address='192.168.50.1' netmask='255.255.255.0'>
	I0407 13:39:15.101684 1216250 main.go:141] libmachine: (kubernetes-upgrade-973925) DBG |     <dhcp>
	I0407 13:39:15.101725 1216250 main.go:141] libmachine: (kubernetes-upgrade-973925) DBG |       <range start='192.168.50.2' end='192.168.50.253'/>
	I0407 13:39:15.101777 1216250 main.go:141] libmachine: (kubernetes-upgrade-973925) DBG |     </dhcp>
	I0407 13:39:15.101811 1216250 main.go:141] libmachine: (kubernetes-upgrade-973925) DBG |   </ip>
	I0407 13:39:15.101849 1216250 main.go:141] libmachine: (kubernetes-upgrade-973925) DBG |   
	I0407 13:39:15.101877 1216250 main.go:141] libmachine: (kubernetes-upgrade-973925) DBG | </network>
	I0407 13:39:15.101897 1216250 main.go:141] libmachine: (kubernetes-upgrade-973925) DBG | 
	I0407 13:39:15.109612 1216250 main.go:141] libmachine: (kubernetes-upgrade-973925) DBG | trying to create private KVM network mk-kubernetes-upgrade-973925 192.168.50.0/24...
	I0407 13:39:15.225867 1216250 main.go:141] libmachine: (kubernetes-upgrade-973925) setting up store path in /home/jenkins/minikube-integration/20602-1162386/.minikube/machines/kubernetes-upgrade-973925 ...
	I0407 13:39:15.225940 1216250 main.go:141] libmachine: (kubernetes-upgrade-973925) DBG | private KVM network mk-kubernetes-upgrade-973925 192.168.50.0/24 created
	I0407 13:39:15.226041 1216250 main.go:141] libmachine: (kubernetes-upgrade-973925) building disk image from file:///home/jenkins/minikube-integration/20602-1162386/.minikube/cache/iso/amd64/minikube-v1.35.0-amd64.iso
	I0407 13:39:15.226101 1216250 main.go:141] libmachine: (kubernetes-upgrade-973925) DBG | I0407 13:39:15.225686 1216274 common.go:144] Making disk image using store path: /home/jenkins/minikube-integration/20602-1162386/.minikube
	I0407 13:39:15.226368 1216250 main.go:141] libmachine: (kubernetes-upgrade-973925) Downloading /home/jenkins/minikube-integration/20602-1162386/.minikube/cache/boot2docker.iso from file:///home/jenkins/minikube-integration/20602-1162386/.minikube/cache/iso/amd64/minikube-v1.35.0-amd64.iso...
	I0407 13:39:15.577557 1216250 main.go:141] libmachine: (kubernetes-upgrade-973925) DBG | I0407 13:39:15.577271 1216274 common.go:151] Creating ssh key: /home/jenkins/minikube-integration/20602-1162386/.minikube/machines/kubernetes-upgrade-973925/id_rsa...
	I0407 13:39:15.751470 1216250 main.go:141] libmachine: (kubernetes-upgrade-973925) DBG | I0407 13:39:15.751306 1216274 common.go:157] Creating raw disk image: /home/jenkins/minikube-integration/20602-1162386/.minikube/machines/kubernetes-upgrade-973925/kubernetes-upgrade-973925.rawdisk...
	I0407 13:39:15.751518 1216250 main.go:141] libmachine: (kubernetes-upgrade-973925) DBG | Writing magic tar header
	I0407 13:39:15.751537 1216250 main.go:141] libmachine: (kubernetes-upgrade-973925) DBG | Writing SSH key tar header
	I0407 13:39:15.751547 1216250 main.go:141] libmachine: (kubernetes-upgrade-973925) DBG | I0407 13:39:15.751429 1216274 common.go:171] Fixing permissions on /home/jenkins/minikube-integration/20602-1162386/.minikube/machines/kubernetes-upgrade-973925 ...
	I0407 13:39:15.751563 1216250 main.go:141] libmachine: (kubernetes-upgrade-973925) DBG | checking permissions on dir: /home/jenkins/minikube-integration/20602-1162386/.minikube/machines/kubernetes-upgrade-973925
	I0407 13:39:15.751575 1216250 main.go:141] libmachine: (kubernetes-upgrade-973925) setting executable bit set on /home/jenkins/minikube-integration/20602-1162386/.minikube/machines/kubernetes-upgrade-973925 (perms=drwx------)
	I0407 13:39:15.751586 1216250 main.go:141] libmachine: (kubernetes-upgrade-973925) DBG | checking permissions on dir: /home/jenkins/minikube-integration/20602-1162386/.minikube/machines
	I0407 13:39:15.751604 1216250 main.go:141] libmachine: (kubernetes-upgrade-973925) DBG | checking permissions on dir: /home/jenkins/minikube-integration/20602-1162386/.minikube
	I0407 13:39:15.751618 1216250 main.go:141] libmachine: (kubernetes-upgrade-973925) DBG | checking permissions on dir: /home/jenkins/minikube-integration/20602-1162386
	I0407 13:39:15.751630 1216250 main.go:141] libmachine: (kubernetes-upgrade-973925) DBG | checking permissions on dir: /home/jenkins/minikube-integration
	I0407 13:39:15.751640 1216250 main.go:141] libmachine: (kubernetes-upgrade-973925) DBG | checking permissions on dir: /home/jenkins
	I0407 13:39:15.751653 1216250 main.go:141] libmachine: (kubernetes-upgrade-973925) DBG | checking permissions on dir: /home
	I0407 13:39:15.751665 1216250 main.go:141] libmachine: (kubernetes-upgrade-973925) DBG | skipping /home - not owner
	I0407 13:39:15.751751 1216250 main.go:141] libmachine: (kubernetes-upgrade-973925) setting executable bit set on /home/jenkins/minikube-integration/20602-1162386/.minikube/machines (perms=drwxr-xr-x)
	I0407 13:39:15.751792 1216250 main.go:141] libmachine: (kubernetes-upgrade-973925) setting executable bit set on /home/jenkins/minikube-integration/20602-1162386/.minikube (perms=drwxr-xr-x)
	I0407 13:39:15.751807 1216250 main.go:141] libmachine: (kubernetes-upgrade-973925) setting executable bit set on /home/jenkins/minikube-integration/20602-1162386 (perms=drwxrwxr-x)
	I0407 13:39:15.751820 1216250 main.go:141] libmachine: (kubernetes-upgrade-973925) setting executable bit set on /home/jenkins/minikube-integration (perms=drwxrwxr-x)
	I0407 13:39:15.751846 1216250 main.go:141] libmachine: (kubernetes-upgrade-973925) setting executable bit set on /home/jenkins (perms=drwxr-xr-x)
	I0407 13:39:15.751861 1216250 main.go:141] libmachine: (kubernetes-upgrade-973925) creating domain...
	I0407 13:39:15.753408 1216250 main.go:141] libmachine: (kubernetes-upgrade-973925) define libvirt domain using xml: 
	I0407 13:39:15.753437 1216250 main.go:141] libmachine: (kubernetes-upgrade-973925) <domain type='kvm'>
	I0407 13:39:15.753451 1216250 main.go:141] libmachine: (kubernetes-upgrade-973925)   <name>kubernetes-upgrade-973925</name>
	I0407 13:39:15.753458 1216250 main.go:141] libmachine: (kubernetes-upgrade-973925)   <memory unit='MiB'>2200</memory>
	I0407 13:39:15.753467 1216250 main.go:141] libmachine: (kubernetes-upgrade-973925)   <vcpu>2</vcpu>
	I0407 13:39:15.753474 1216250 main.go:141] libmachine: (kubernetes-upgrade-973925)   <features>
	I0407 13:39:15.753483 1216250 main.go:141] libmachine: (kubernetes-upgrade-973925)     <acpi/>
	I0407 13:39:15.753497 1216250 main.go:141] libmachine: (kubernetes-upgrade-973925)     <apic/>
	I0407 13:39:15.753503 1216250 main.go:141] libmachine: (kubernetes-upgrade-973925)     <pae/>
	I0407 13:39:15.753512 1216250 main.go:141] libmachine: (kubernetes-upgrade-973925)     
	I0407 13:39:15.753520 1216250 main.go:141] libmachine: (kubernetes-upgrade-973925)   </features>
	I0407 13:39:15.753531 1216250 main.go:141] libmachine: (kubernetes-upgrade-973925)   <cpu mode='host-passthrough'>
	I0407 13:39:15.753548 1216250 main.go:141] libmachine: (kubernetes-upgrade-973925)   
	I0407 13:39:15.753564 1216250 main.go:141] libmachine: (kubernetes-upgrade-973925)   </cpu>
	I0407 13:39:15.753572 1216250 main.go:141] libmachine: (kubernetes-upgrade-973925)   <os>
	I0407 13:39:15.753577 1216250 main.go:141] libmachine: (kubernetes-upgrade-973925)     <type>hvm</type>
	I0407 13:39:15.753582 1216250 main.go:141] libmachine: (kubernetes-upgrade-973925)     <boot dev='cdrom'/>
	I0407 13:39:15.753588 1216250 main.go:141] libmachine: (kubernetes-upgrade-973925)     <boot dev='hd'/>
	I0407 13:39:15.753597 1216250 main.go:141] libmachine: (kubernetes-upgrade-973925)     <bootmenu enable='no'/>
	I0407 13:39:15.753606 1216250 main.go:141] libmachine: (kubernetes-upgrade-973925)   </os>
	I0407 13:39:15.753614 1216250 main.go:141] libmachine: (kubernetes-upgrade-973925)   <devices>
	I0407 13:39:15.753626 1216250 main.go:141] libmachine: (kubernetes-upgrade-973925)     <disk type='file' device='cdrom'>
	I0407 13:39:15.753682 1216250 main.go:141] libmachine: (kubernetes-upgrade-973925)       <source file='/home/jenkins/minikube-integration/20602-1162386/.minikube/machines/kubernetes-upgrade-973925/boot2docker.iso'/>
	I0407 13:39:15.753739 1216250 main.go:141] libmachine: (kubernetes-upgrade-973925)       <target dev='hdc' bus='scsi'/>
	I0407 13:39:15.753753 1216250 main.go:141] libmachine: (kubernetes-upgrade-973925)       <readonly/>
	I0407 13:39:15.753765 1216250 main.go:141] libmachine: (kubernetes-upgrade-973925)     </disk>
	I0407 13:39:15.753776 1216250 main.go:141] libmachine: (kubernetes-upgrade-973925)     <disk type='file' device='disk'>
	I0407 13:39:15.753789 1216250 main.go:141] libmachine: (kubernetes-upgrade-973925)       <driver name='qemu' type='raw' cache='default' io='threads' />
	I0407 13:39:15.753807 1216250 main.go:141] libmachine: (kubernetes-upgrade-973925)       <source file='/home/jenkins/minikube-integration/20602-1162386/.minikube/machines/kubernetes-upgrade-973925/kubernetes-upgrade-973925.rawdisk'/>
	I0407 13:39:15.753819 1216250 main.go:141] libmachine: (kubernetes-upgrade-973925)       <target dev='hda' bus='virtio'/>
	I0407 13:39:15.753890 1216250 main.go:141] libmachine: (kubernetes-upgrade-973925)     </disk>
	I0407 13:39:15.753917 1216250 main.go:141] libmachine: (kubernetes-upgrade-973925)     <interface type='network'>
	I0407 13:39:15.753934 1216250 main.go:141] libmachine: (kubernetes-upgrade-973925)       <source network='mk-kubernetes-upgrade-973925'/>
	I0407 13:39:15.753947 1216250 main.go:141] libmachine: (kubernetes-upgrade-973925)       <model type='virtio'/>
	I0407 13:39:15.753955 1216250 main.go:141] libmachine: (kubernetes-upgrade-973925)     </interface>
	I0407 13:39:15.753977 1216250 main.go:141] libmachine: (kubernetes-upgrade-973925)     <interface type='network'>
	I0407 13:39:15.753989 1216250 main.go:141] libmachine: (kubernetes-upgrade-973925)       <source network='default'/>
	I0407 13:39:15.754007 1216250 main.go:141] libmachine: (kubernetes-upgrade-973925)       <model type='virtio'/>
	I0407 13:39:15.754020 1216250 main.go:141] libmachine: (kubernetes-upgrade-973925)     </interface>
	I0407 13:39:15.754035 1216250 main.go:141] libmachine: (kubernetes-upgrade-973925)     <serial type='pty'>
	I0407 13:39:15.754049 1216250 main.go:141] libmachine: (kubernetes-upgrade-973925)       <target port='0'/>
	I0407 13:39:15.754059 1216250 main.go:141] libmachine: (kubernetes-upgrade-973925)     </serial>
	I0407 13:39:15.754066 1216250 main.go:141] libmachine: (kubernetes-upgrade-973925)     <console type='pty'>
	I0407 13:39:15.754081 1216250 main.go:141] libmachine: (kubernetes-upgrade-973925)       <target type='serial' port='0'/>
	I0407 13:39:15.754092 1216250 main.go:141] libmachine: (kubernetes-upgrade-973925)     </console>
	I0407 13:39:15.754101 1216250 main.go:141] libmachine: (kubernetes-upgrade-973925)     <rng model='virtio'>
	I0407 13:39:15.754110 1216250 main.go:141] libmachine: (kubernetes-upgrade-973925)       <backend model='random'>/dev/random</backend>
	I0407 13:39:15.754120 1216250 main.go:141] libmachine: (kubernetes-upgrade-973925)     </rng>
	I0407 13:39:15.754127 1216250 main.go:141] libmachine: (kubernetes-upgrade-973925)     
	I0407 13:39:15.754136 1216250 main.go:141] libmachine: (kubernetes-upgrade-973925)     
	I0407 13:39:15.754144 1216250 main.go:141] libmachine: (kubernetes-upgrade-973925)   </devices>
	I0407 13:39:15.754157 1216250 main.go:141] libmachine: (kubernetes-upgrade-973925) </domain>
	I0407 13:39:15.754170 1216250 main.go:141] libmachine: (kubernetes-upgrade-973925) 
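The XML printed above is the libvirt domain definition the kvm2 driver submits before booting the VM (boot2docker ISO as a CD-ROM, the raw disk, two virtio NICs, serial console, virtio RNG). For illustration only, a minimal Go sketch of defining and starting such a domain with the libvirt.org/go/libvirt bindings; the connection URI matches the KVMQemuURI in this log, but the file name is an assumption and this is not the driver's actual code:

    package main

    import (
        "fmt"
        "os"

        "libvirt.org/go/libvirt" // needs the libvirt C library (cgo)
    )

    func main() {
        // Connect to the system libvirt daemon (qemu:///system, as in the log).
        conn, err := libvirt.NewConnect("qemu:///system")
        if err != nil {
            panic(err)
        }
        defer conn.Close()

        // A domain XML like the one printed above; the path is illustrative.
        xml, err := os.ReadFile("domain.xml")
        if err != nil {
            panic(err)
        }

        // Define the persistent domain, then start it ("creating domain..." in the log).
        dom, err := conn.DomainDefineXML(string(xml))
        if err != nil {
            panic(err)
        }
        defer dom.Free()

        if err := dom.Create(); err != nil {
            panic(err)
        }
        fmt.Println("domain started")
    }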
	I0407 13:39:15.758724 1216250 main.go:141] libmachine: (kubernetes-upgrade-973925) DBG | domain kubernetes-upgrade-973925 has defined MAC address 52:54:00:36:5e:d7 in network default
	I0407 13:39:15.759490 1216250 main.go:141] libmachine: (kubernetes-upgrade-973925) DBG | domain kubernetes-upgrade-973925 has defined MAC address 52:54:00:5e:eb:42 in network mk-kubernetes-upgrade-973925
	I0407 13:39:15.759510 1216250 main.go:141] libmachine: (kubernetes-upgrade-973925) starting domain...
	I0407 13:39:15.759526 1216250 main.go:141] libmachine: (kubernetes-upgrade-973925) ensuring networks are active...
	I0407 13:39:15.760792 1216250 main.go:141] libmachine: (kubernetes-upgrade-973925) Ensuring network default is active
	I0407 13:39:15.761182 1216250 main.go:141] libmachine: (kubernetes-upgrade-973925) Ensuring network mk-kubernetes-upgrade-973925 is active
	I0407 13:39:15.762188 1216250 main.go:141] libmachine: (kubernetes-upgrade-973925) getting domain XML...
	I0407 13:39:15.763459 1216250 main.go:141] libmachine: (kubernetes-upgrade-973925) creating domain...
	I0407 13:39:17.390667 1216250 main.go:141] libmachine: (kubernetes-upgrade-973925) waiting for IP...
	I0407 13:39:17.392053 1216250 main.go:141] libmachine: (kubernetes-upgrade-973925) DBG | domain kubernetes-upgrade-973925 has defined MAC address 52:54:00:5e:eb:42 in network mk-kubernetes-upgrade-973925
	I0407 13:39:17.393013 1216250 main.go:141] libmachine: (kubernetes-upgrade-973925) DBG | unable to find current IP address of domain kubernetes-upgrade-973925 in network mk-kubernetes-upgrade-973925
	I0407 13:39:17.393096 1216250 main.go:141] libmachine: (kubernetes-upgrade-973925) DBG | I0407 13:39:17.392998 1216274 retry.go:31] will retry after 205.355102ms: waiting for domain to come up
	I0407 13:39:17.600872 1216250 main.go:141] libmachine: (kubernetes-upgrade-973925) DBG | domain kubernetes-upgrade-973925 has defined MAC address 52:54:00:5e:eb:42 in network mk-kubernetes-upgrade-973925
	I0407 13:39:17.601887 1216250 main.go:141] libmachine: (kubernetes-upgrade-973925) DBG | unable to find current IP address of domain kubernetes-upgrade-973925 in network mk-kubernetes-upgrade-973925
	I0407 13:39:17.601921 1216250 main.go:141] libmachine: (kubernetes-upgrade-973925) DBG | I0407 13:39:17.601817 1216274 retry.go:31] will retry after 348.397694ms: waiting for domain to come up
	I0407 13:39:17.951915 1216250 main.go:141] libmachine: (kubernetes-upgrade-973925) DBG | domain kubernetes-upgrade-973925 has defined MAC address 52:54:00:5e:eb:42 in network mk-kubernetes-upgrade-973925
	I0407 13:39:17.952702 1216250 main.go:141] libmachine: (kubernetes-upgrade-973925) DBG | unable to find current IP address of domain kubernetes-upgrade-973925 in network mk-kubernetes-upgrade-973925
	I0407 13:39:17.952746 1216250 main.go:141] libmachine: (kubernetes-upgrade-973925) DBG | I0407 13:39:17.952628 1216274 retry.go:31] will retry after 329.308151ms: waiting for domain to come up
	I0407 13:39:18.284582 1216250 main.go:141] libmachine: (kubernetes-upgrade-973925) DBG | domain kubernetes-upgrade-973925 has defined MAC address 52:54:00:5e:eb:42 in network mk-kubernetes-upgrade-973925
	I0407 13:39:18.285484 1216250 main.go:141] libmachine: (kubernetes-upgrade-973925) DBG | unable to find current IP address of domain kubernetes-upgrade-973925 in network mk-kubernetes-upgrade-973925
	I0407 13:39:18.285509 1216250 main.go:141] libmachine: (kubernetes-upgrade-973925) DBG | I0407 13:39:18.285417 1216274 retry.go:31] will retry after 496.28751ms: waiting for domain to come up
	I0407 13:39:18.783529 1216250 main.go:141] libmachine: (kubernetes-upgrade-973925) DBG | domain kubernetes-upgrade-973925 has defined MAC address 52:54:00:5e:eb:42 in network mk-kubernetes-upgrade-973925
	I0407 13:39:18.784386 1216250 main.go:141] libmachine: (kubernetes-upgrade-973925) DBG | unable to find current IP address of domain kubernetes-upgrade-973925 in network mk-kubernetes-upgrade-973925
	I0407 13:39:18.784420 1216250 main.go:141] libmachine: (kubernetes-upgrade-973925) DBG | I0407 13:39:18.784310 1216274 retry.go:31] will retry after 677.735933ms: waiting for domain to come up
	I0407 13:39:19.464217 1216250 main.go:141] libmachine: (kubernetes-upgrade-973925) DBG | domain kubernetes-upgrade-973925 has defined MAC address 52:54:00:5e:eb:42 in network mk-kubernetes-upgrade-973925
	I0407 13:39:19.465316 1216250 main.go:141] libmachine: (kubernetes-upgrade-973925) DBG | unable to find current IP address of domain kubernetes-upgrade-973925 in network mk-kubernetes-upgrade-973925
	I0407 13:39:19.465360 1216250 main.go:141] libmachine: (kubernetes-upgrade-973925) DBG | I0407 13:39:19.465213 1216274 retry.go:31] will retry after 894.833152ms: waiting for domain to come up
	I0407 13:39:20.362240 1216250 main.go:141] libmachine: (kubernetes-upgrade-973925) DBG | domain kubernetes-upgrade-973925 has defined MAC address 52:54:00:5e:eb:42 in network mk-kubernetes-upgrade-973925
	I0407 13:39:20.362784 1216250 main.go:141] libmachine: (kubernetes-upgrade-973925) DBG | unable to find current IP address of domain kubernetes-upgrade-973925 in network mk-kubernetes-upgrade-973925
	I0407 13:39:20.362915 1216250 main.go:141] libmachine: (kubernetes-upgrade-973925) DBG | I0407 13:39:20.362783 1216274 retry.go:31] will retry after 742.013194ms: waiting for domain to come up
	I0407 13:39:21.107046 1216250 main.go:141] libmachine: (kubernetes-upgrade-973925) DBG | domain kubernetes-upgrade-973925 has defined MAC address 52:54:00:5e:eb:42 in network mk-kubernetes-upgrade-973925
	I0407 13:39:21.107652 1216250 main.go:141] libmachine: (kubernetes-upgrade-973925) DBG | unable to find current IP address of domain kubernetes-upgrade-973925 in network mk-kubernetes-upgrade-973925
	I0407 13:39:21.107685 1216250 main.go:141] libmachine: (kubernetes-upgrade-973925) DBG | I0407 13:39:21.107593 1216274 retry.go:31] will retry after 1.086633977s: waiting for domain to come up
	I0407 13:39:22.195713 1216250 main.go:141] libmachine: (kubernetes-upgrade-973925) DBG | domain kubernetes-upgrade-973925 has defined MAC address 52:54:00:5e:eb:42 in network mk-kubernetes-upgrade-973925
	I0407 13:39:22.196178 1216250 main.go:141] libmachine: (kubernetes-upgrade-973925) DBG | unable to find current IP address of domain kubernetes-upgrade-973925 in network mk-kubernetes-upgrade-973925
	I0407 13:39:22.196219 1216250 main.go:141] libmachine: (kubernetes-upgrade-973925) DBG | I0407 13:39:22.196124 1216274 retry.go:31] will retry after 1.450369893s: waiting for domain to come up
	I0407 13:39:23.648490 1216250 main.go:141] libmachine: (kubernetes-upgrade-973925) DBG | domain kubernetes-upgrade-973925 has defined MAC address 52:54:00:5e:eb:42 in network mk-kubernetes-upgrade-973925
	I0407 13:39:23.648937 1216250 main.go:141] libmachine: (kubernetes-upgrade-973925) DBG | unable to find current IP address of domain kubernetes-upgrade-973925 in network mk-kubernetes-upgrade-973925
	I0407 13:39:23.648991 1216250 main.go:141] libmachine: (kubernetes-upgrade-973925) DBG | I0407 13:39:23.648912 1216274 retry.go:31] will retry after 1.835859769s: waiting for domain to come up
	I0407 13:39:25.487266 1216250 main.go:141] libmachine: (kubernetes-upgrade-973925) DBG | domain kubernetes-upgrade-973925 has defined MAC address 52:54:00:5e:eb:42 in network mk-kubernetes-upgrade-973925
	I0407 13:39:25.488083 1216250 main.go:141] libmachine: (kubernetes-upgrade-973925) DBG | unable to find current IP address of domain kubernetes-upgrade-973925 in network mk-kubernetes-upgrade-973925
	I0407 13:39:25.488104 1216250 main.go:141] libmachine: (kubernetes-upgrade-973925) DBG | I0407 13:39:25.488051 1216274 retry.go:31] will retry after 2.173197689s: waiting for domain to come up
	I0407 13:39:27.664820 1216250 main.go:141] libmachine: (kubernetes-upgrade-973925) DBG | domain kubernetes-upgrade-973925 has defined MAC address 52:54:00:5e:eb:42 in network mk-kubernetes-upgrade-973925
	I0407 13:39:27.665573 1216250 main.go:141] libmachine: (kubernetes-upgrade-973925) DBG | unable to find current IP address of domain kubernetes-upgrade-973925 in network mk-kubernetes-upgrade-973925
	I0407 13:39:27.665608 1216250 main.go:141] libmachine: (kubernetes-upgrade-973925) DBG | I0407 13:39:27.665501 1216274 retry.go:31] will retry after 3.506308771s: waiting for domain to come up
	I0407 13:39:31.173519 1216250 main.go:141] libmachine: (kubernetes-upgrade-973925) DBG | domain kubernetes-upgrade-973925 has defined MAC address 52:54:00:5e:eb:42 in network mk-kubernetes-upgrade-973925
	I0407 13:39:31.174513 1216250 main.go:141] libmachine: (kubernetes-upgrade-973925) DBG | unable to find current IP address of domain kubernetes-upgrade-973925 in network mk-kubernetes-upgrade-973925
	I0407 13:39:31.174546 1216250 main.go:141] libmachine: (kubernetes-upgrade-973925) DBG | I0407 13:39:31.174451 1216274 retry.go:31] will retry after 3.845051485s: waiting for domain to come up
	I0407 13:39:35.023235 1216250 main.go:141] libmachine: (kubernetes-upgrade-973925) DBG | domain kubernetes-upgrade-973925 has defined MAC address 52:54:00:5e:eb:42 in network mk-kubernetes-upgrade-973925
	I0407 13:39:35.023923 1216250 main.go:141] libmachine: (kubernetes-upgrade-973925) DBG | unable to find current IP address of domain kubernetes-upgrade-973925 in network mk-kubernetes-upgrade-973925
	I0407 13:39:35.024087 1216250 main.go:141] libmachine: (kubernetes-upgrade-973925) DBG | I0407 13:39:35.023875 1216274 retry.go:31] will retry after 4.891846104s: waiting for domain to come up
	I0407 13:39:39.918845 1216250 main.go:141] libmachine: (kubernetes-upgrade-973925) DBG | domain kubernetes-upgrade-973925 has defined MAC address 52:54:00:5e:eb:42 in network mk-kubernetes-upgrade-973925
	I0407 13:39:39.919414 1216250 main.go:141] libmachine: (kubernetes-upgrade-973925) found domain IP: 192.168.50.245
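The "will retry after ..." lines above are a wait-with-backoff loop around the DHCP lease lookup for the domain's MAC address. A hedged sketch of that pattern in Go; the lookupIP helper and the delay growth are illustrative stand-ins, not the real retry.go implementation:

    package main

    import (
        "errors"
        "fmt"
        "time"
    )

    // lookupIP is a stand-in for querying the libvirt network's DHCP leases.
    func lookupIP(mac string) (string, error) {
        return "", errors.New("no lease yet") // placeholder
    }

    func waitForIP(mac string, timeout time.Duration) (string, error) {
        deadline := time.Now().Add(timeout)
        delay := 200 * time.Millisecond
        for time.Now().Before(deadline) {
            if ip, err := lookupIP(mac); err == nil {
                return ip, nil
            }
            fmt.Printf("will retry after %v: waiting for domain to come up\n", delay)
            time.Sleep(delay)
            if delay < 5*time.Second {
                delay = delay * 3 / 2 // grow the backoff, roughly like the intervals above
            }
        }
        return "", fmt.Errorf("timed out waiting for IP of %s", mac)
    }

    func main() {
        if ip, err := waitForIP("52:54:00:5e:eb:42", 2*time.Second); err != nil {
            fmt.Println(err)
        } else {
            fmt.Println("found domain IP:", ip)
        }
    }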
	I0407 13:39:39.919440 1216250 main.go:141] libmachine: (kubernetes-upgrade-973925) reserving static IP address...
	I0407 13:39:39.919457 1216250 main.go:141] libmachine: (kubernetes-upgrade-973925) DBG | domain kubernetes-upgrade-973925 has current primary IP address 192.168.50.245 and MAC address 52:54:00:5e:eb:42 in network mk-kubernetes-upgrade-973925
	I0407 13:39:39.919904 1216250 main.go:141] libmachine: (kubernetes-upgrade-973925) DBG | unable to find host DHCP lease matching {name: "kubernetes-upgrade-973925", mac: "52:54:00:5e:eb:42", ip: "192.168.50.245"} in network mk-kubernetes-upgrade-973925
	I0407 13:39:40.034696 1216250 main.go:141] libmachine: (kubernetes-upgrade-973925) reserved static IP address 192.168.50.245 for domain kubernetes-upgrade-973925
	I0407 13:39:40.034733 1216250 main.go:141] libmachine: (kubernetes-upgrade-973925) DBG | Getting to WaitForSSH function...
	I0407 13:39:40.034743 1216250 main.go:141] libmachine: (kubernetes-upgrade-973925) waiting for SSH...
	I0407 13:39:40.039454 1216250 main.go:141] libmachine: (kubernetes-upgrade-973925) DBG | domain kubernetes-upgrade-973925 has defined MAC address 52:54:00:5e:eb:42 in network mk-kubernetes-upgrade-973925
	I0407 13:39:40.040000 1216250 main.go:141] libmachine: (kubernetes-upgrade-973925) DBG | unable to find host DHCP lease matching {name: "", mac: "52:54:00:5e:eb:42", ip: ""} in network mk-kubernetes-upgrade-973925
	I0407 13:39:40.040034 1216250 main.go:141] libmachine: (kubernetes-upgrade-973925) DBG | unable to find defined IP address of network mk-kubernetes-upgrade-973925 interface with MAC address 52:54:00:5e:eb:42
	I0407 13:39:40.040238 1216250 main.go:141] libmachine: (kubernetes-upgrade-973925) DBG | Using SSH client type: external
	I0407 13:39:40.040274 1216250 main.go:141] libmachine: (kubernetes-upgrade-973925) DBG | Using SSH private key: /home/jenkins/minikube-integration/20602-1162386/.minikube/machines/kubernetes-upgrade-973925/id_rsa (-rw-------)
	I0407 13:39:40.040356 1216250 main.go:141] libmachine: (kubernetes-upgrade-973925) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@ -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/20602-1162386/.minikube/machines/kubernetes-upgrade-973925/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0407 13:39:40.040389 1216250 main.go:141] libmachine: (kubernetes-upgrade-973925) DBG | About to run SSH command:
	I0407 13:39:40.040411 1216250 main.go:141] libmachine: (kubernetes-upgrade-973925) DBG | exit 0
	I0407 13:39:40.046520 1216250 main.go:141] libmachine: (kubernetes-upgrade-973925) DBG | SSH cmd err, output: exit status 255: 
	I0407 13:39:40.046555 1216250 main.go:141] libmachine: (kubernetes-upgrade-973925) DBG | Error getting ssh command 'exit 0' : ssh command error:
	I0407 13:39:40.046566 1216250 main.go:141] libmachine: (kubernetes-upgrade-973925) DBG | command : exit 0
	I0407 13:39:40.046574 1216250 main.go:141] libmachine: (kubernetes-upgrade-973925) DBG | err     : exit status 255
	I0407 13:39:40.046585 1216250 main.go:141] libmachine: (kubernetes-upgrade-973925) DBG | output  : 
	I0407 13:39:43.049229 1216250 main.go:141] libmachine: (kubernetes-upgrade-973925) DBG | Getting to WaitForSSH function...
	I0407 13:39:43.053282 1216250 main.go:141] libmachine: (kubernetes-upgrade-973925) DBG | domain kubernetes-upgrade-973925 has defined MAC address 52:54:00:5e:eb:42 in network mk-kubernetes-upgrade-973925
	I0407 13:39:43.053895 1216250 main.go:141] libmachine: (kubernetes-upgrade-973925) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5e:eb:42", ip: ""} in network mk-kubernetes-upgrade-973925: {Iface:virbr2 ExpiryTime:2025-04-07 14:39:31 +0000 UTC Type:0 Mac:52:54:00:5e:eb:42 Iaid: IPaddr:192.168.50.245 Prefix:24 Hostname:kubernetes-upgrade-973925 Clientid:01:52:54:00:5e:eb:42}
	I0407 13:39:43.053937 1216250 main.go:141] libmachine: (kubernetes-upgrade-973925) DBG | domain kubernetes-upgrade-973925 has defined IP address 192.168.50.245 and MAC address 52:54:00:5e:eb:42 in network mk-kubernetes-upgrade-973925
	I0407 13:39:43.054116 1216250 main.go:141] libmachine: (kubernetes-upgrade-973925) DBG | Using SSH client type: external
	I0407 13:39:43.054152 1216250 main.go:141] libmachine: (kubernetes-upgrade-973925) DBG | Using SSH private key: /home/jenkins/minikube-integration/20602-1162386/.minikube/machines/kubernetes-upgrade-973925/id_rsa (-rw-------)
	I0407 13:39:43.054185 1216250 main.go:141] libmachine: (kubernetes-upgrade-973925) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.50.245 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/20602-1162386/.minikube/machines/kubernetes-upgrade-973925/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0407 13:39:43.054205 1216250 main.go:141] libmachine: (kubernetes-upgrade-973925) DBG | About to run SSH command:
	I0407 13:39:43.054220 1216250 main.go:141] libmachine: (kubernetes-upgrade-973925) DBG | exit 0
	I0407 13:39:43.182078 1216250 main.go:141] libmachine: (kubernetes-upgrade-973925) DBG | SSH cmd err, output: <nil>: 
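WaitForSSH above keeps running "exit 0" over SSH with the per-machine key until the guest answers (the first attempt fails with status 255 before the DHCP lease appears, the next one succeeds). A minimal probe of that kind using golang.org/x/crypto/ssh; the address, user, and key path are taken from the log, but the code itself is an illustrative sketch, not minikube's SSH runner:

    package main

    import (
        "fmt"
        "os"
        "time"

        "golang.org/x/crypto/ssh"
    )

    func sshExitZero(addr, user, keyPath string) error {
        key, err := os.ReadFile(keyPath)
        if err != nil {
            return err
        }
        signer, err := ssh.ParsePrivateKey(key)
        if err != nil {
            return err
        }
        cfg := &ssh.ClientConfig{
            User:            user,
            Auth:            []ssh.AuthMethod{ssh.PublicKeys(signer)},
            HostKeyCallback: ssh.InsecureIgnoreHostKey(), // matches StrictHostKeyChecking=no above
            Timeout:         10 * time.Second,
        }
        client, err := ssh.Dial("tcp", addr, cfg)
        if err != nil {
            return err
        }
        defer client.Close()
        session, err := client.NewSession()
        if err != nil {
            return err
        }
        defer session.Close()
        return session.Run("exit 0") // the same probe command the log shows
    }

    func main() {
        err := sshExitZero("192.168.50.245:22", "docker",
            "/home/jenkins/minikube-integration/20602-1162386/.minikube/machines/kubernetes-upgrade-973925/id_rsa")
        fmt.Println("ssh probe:", err)
    }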
	I0407 13:39:43.182479 1216250 main.go:141] libmachine: (kubernetes-upgrade-973925) KVM machine creation complete
	I0407 13:39:43.182794 1216250 main.go:141] libmachine: (kubernetes-upgrade-973925) Calling .GetConfigRaw
	I0407 13:39:43.183427 1216250 main.go:141] libmachine: (kubernetes-upgrade-973925) Calling .DriverName
	I0407 13:39:43.183673 1216250 main.go:141] libmachine: (kubernetes-upgrade-973925) Calling .DriverName
	I0407 13:39:43.183859 1216250 main.go:141] libmachine: Waiting for machine to be running, this may take a few minutes...
	I0407 13:39:43.183879 1216250 main.go:141] libmachine: (kubernetes-upgrade-973925) Calling .GetState
	I0407 13:39:43.185378 1216250 main.go:141] libmachine: Detecting operating system of created instance...
	I0407 13:39:43.185400 1216250 main.go:141] libmachine: Waiting for SSH to be available...
	I0407 13:39:43.185406 1216250 main.go:141] libmachine: Getting to WaitForSSH function...
	I0407 13:39:43.185413 1216250 main.go:141] libmachine: (kubernetes-upgrade-973925) Calling .GetSSHHostname
	I0407 13:39:43.188457 1216250 main.go:141] libmachine: (kubernetes-upgrade-973925) DBG | domain kubernetes-upgrade-973925 has defined MAC address 52:54:00:5e:eb:42 in network mk-kubernetes-upgrade-973925
	I0407 13:39:43.188841 1216250 main.go:141] libmachine: (kubernetes-upgrade-973925) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5e:eb:42", ip: ""} in network mk-kubernetes-upgrade-973925: {Iface:virbr2 ExpiryTime:2025-04-07 14:39:31 +0000 UTC Type:0 Mac:52:54:00:5e:eb:42 Iaid: IPaddr:192.168.50.245 Prefix:24 Hostname:kubernetes-upgrade-973925 Clientid:01:52:54:00:5e:eb:42}
	I0407 13:39:43.188874 1216250 main.go:141] libmachine: (kubernetes-upgrade-973925) DBG | domain kubernetes-upgrade-973925 has defined IP address 192.168.50.245 and MAC address 52:54:00:5e:eb:42 in network mk-kubernetes-upgrade-973925
	I0407 13:39:43.189180 1216250 main.go:141] libmachine: (kubernetes-upgrade-973925) Calling .GetSSHPort
	I0407 13:39:43.189487 1216250 main.go:141] libmachine: (kubernetes-upgrade-973925) Calling .GetSSHKeyPath
	I0407 13:39:43.189694 1216250 main.go:141] libmachine: (kubernetes-upgrade-973925) Calling .GetSSHKeyPath
	I0407 13:39:43.189947 1216250 main.go:141] libmachine: (kubernetes-upgrade-973925) Calling .GetSSHUsername
	I0407 13:39:43.190254 1216250 main.go:141] libmachine: Using SSH client type: native
	I0407 13:39:43.190574 1216250 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x836360] 0x839060 <nil>  [] 0s} 192.168.50.245 22 <nil> <nil>}
	I0407 13:39:43.190590 1216250 main.go:141] libmachine: About to run SSH command:
	exit 0
	I0407 13:39:43.305678 1216250 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0407 13:39:43.305756 1216250 main.go:141] libmachine: Detecting the provisioner...
	I0407 13:39:43.305769 1216250 main.go:141] libmachine: (kubernetes-upgrade-973925) Calling .GetSSHHostname
	I0407 13:39:43.309914 1216250 main.go:141] libmachine: (kubernetes-upgrade-973925) DBG | domain kubernetes-upgrade-973925 has defined MAC address 52:54:00:5e:eb:42 in network mk-kubernetes-upgrade-973925
	I0407 13:39:43.310589 1216250 main.go:141] libmachine: (kubernetes-upgrade-973925) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5e:eb:42", ip: ""} in network mk-kubernetes-upgrade-973925: {Iface:virbr2 ExpiryTime:2025-04-07 14:39:31 +0000 UTC Type:0 Mac:52:54:00:5e:eb:42 Iaid: IPaddr:192.168.50.245 Prefix:24 Hostname:kubernetes-upgrade-973925 Clientid:01:52:54:00:5e:eb:42}
	I0407 13:39:43.310637 1216250 main.go:141] libmachine: (kubernetes-upgrade-973925) DBG | domain kubernetes-upgrade-973925 has defined IP address 192.168.50.245 and MAC address 52:54:00:5e:eb:42 in network mk-kubernetes-upgrade-973925
	I0407 13:39:43.310832 1216250 main.go:141] libmachine: (kubernetes-upgrade-973925) Calling .GetSSHPort
	I0407 13:39:43.311226 1216250 main.go:141] libmachine: (kubernetes-upgrade-973925) Calling .GetSSHKeyPath
	I0407 13:39:43.311463 1216250 main.go:141] libmachine: (kubernetes-upgrade-973925) Calling .GetSSHKeyPath
	I0407 13:39:43.311658 1216250 main.go:141] libmachine: (kubernetes-upgrade-973925) Calling .GetSSHUsername
	I0407 13:39:43.311872 1216250 main.go:141] libmachine: Using SSH client type: native
	I0407 13:39:43.312167 1216250 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x836360] 0x839060 <nil>  [] 0s} 192.168.50.245 22 <nil> <nil>}
	I0407 13:39:43.312181 1216250 main.go:141] libmachine: About to run SSH command:
	cat /etc/os-release
	I0407 13:39:43.423496 1216250 main.go:141] libmachine: SSH cmd err, output: <nil>: NAME=Buildroot
	VERSION=2023.02.9-dirty
	ID=buildroot
	VERSION_ID=2023.02.9
	PRETTY_NAME="Buildroot 2023.02.9"
	
	I0407 13:39:43.423580 1216250 main.go:141] libmachine: found compatible host: buildroot
	I0407 13:39:43.423594 1216250 main.go:141] libmachine: Provisioning with buildroot...
	I0407 13:39:43.423607 1216250 main.go:141] libmachine: (kubernetes-upgrade-973925) Calling .GetMachineName
	I0407 13:39:43.423947 1216250 buildroot.go:166] provisioning hostname "kubernetes-upgrade-973925"
	I0407 13:39:43.423986 1216250 main.go:141] libmachine: (kubernetes-upgrade-973925) Calling .GetMachineName
	I0407 13:39:43.424252 1216250 main.go:141] libmachine: (kubernetes-upgrade-973925) Calling .GetSSHHostname
	I0407 13:39:43.428593 1216250 main.go:141] libmachine: (kubernetes-upgrade-973925) DBG | domain kubernetes-upgrade-973925 has defined MAC address 52:54:00:5e:eb:42 in network mk-kubernetes-upgrade-973925
	I0407 13:39:43.429091 1216250 main.go:141] libmachine: (kubernetes-upgrade-973925) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5e:eb:42", ip: ""} in network mk-kubernetes-upgrade-973925: {Iface:virbr2 ExpiryTime:2025-04-07 14:39:31 +0000 UTC Type:0 Mac:52:54:00:5e:eb:42 Iaid: IPaddr:192.168.50.245 Prefix:24 Hostname:kubernetes-upgrade-973925 Clientid:01:52:54:00:5e:eb:42}
	I0407 13:39:43.429141 1216250 main.go:141] libmachine: (kubernetes-upgrade-973925) DBG | domain kubernetes-upgrade-973925 has defined IP address 192.168.50.245 and MAC address 52:54:00:5e:eb:42 in network mk-kubernetes-upgrade-973925
	I0407 13:39:43.429389 1216250 main.go:141] libmachine: (kubernetes-upgrade-973925) Calling .GetSSHPort
	I0407 13:39:43.429792 1216250 main.go:141] libmachine: (kubernetes-upgrade-973925) Calling .GetSSHKeyPath
	I0407 13:39:43.430141 1216250 main.go:141] libmachine: (kubernetes-upgrade-973925) Calling .GetSSHKeyPath
	I0407 13:39:43.430392 1216250 main.go:141] libmachine: (kubernetes-upgrade-973925) Calling .GetSSHUsername
	I0407 13:39:43.430664 1216250 main.go:141] libmachine: Using SSH client type: native
	I0407 13:39:43.431014 1216250 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x836360] 0x839060 <nil>  [] 0s} 192.168.50.245 22 <nil> <nil>}
	I0407 13:39:43.431038 1216250 main.go:141] libmachine: About to run SSH command:
	sudo hostname kubernetes-upgrade-973925 && echo "kubernetes-upgrade-973925" | sudo tee /etc/hostname
	I0407 13:39:43.560888 1216250 main.go:141] libmachine: SSH cmd err, output: <nil>: kubernetes-upgrade-973925
	
	I0407 13:39:43.560930 1216250 main.go:141] libmachine: (kubernetes-upgrade-973925) Calling .GetSSHHostname
	I0407 13:39:43.566213 1216250 main.go:141] libmachine: (kubernetes-upgrade-973925) DBG | domain kubernetes-upgrade-973925 has defined MAC address 52:54:00:5e:eb:42 in network mk-kubernetes-upgrade-973925
	I0407 13:39:43.566858 1216250 main.go:141] libmachine: (kubernetes-upgrade-973925) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5e:eb:42", ip: ""} in network mk-kubernetes-upgrade-973925: {Iface:virbr2 ExpiryTime:2025-04-07 14:39:31 +0000 UTC Type:0 Mac:52:54:00:5e:eb:42 Iaid: IPaddr:192.168.50.245 Prefix:24 Hostname:kubernetes-upgrade-973925 Clientid:01:52:54:00:5e:eb:42}
	I0407 13:39:43.566910 1216250 main.go:141] libmachine: (kubernetes-upgrade-973925) DBG | domain kubernetes-upgrade-973925 has defined IP address 192.168.50.245 and MAC address 52:54:00:5e:eb:42 in network mk-kubernetes-upgrade-973925
	I0407 13:39:43.567270 1216250 main.go:141] libmachine: (kubernetes-upgrade-973925) Calling .GetSSHPort
	I0407 13:39:43.567611 1216250 main.go:141] libmachine: (kubernetes-upgrade-973925) Calling .GetSSHKeyPath
	I0407 13:39:43.568054 1216250 main.go:141] libmachine: (kubernetes-upgrade-973925) Calling .GetSSHKeyPath
	I0407 13:39:43.568329 1216250 main.go:141] libmachine: (kubernetes-upgrade-973925) Calling .GetSSHUsername
	I0407 13:39:43.568643 1216250 main.go:141] libmachine: Using SSH client type: native
	I0407 13:39:43.568909 1216250 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x836360] 0x839060 <nil>  [] 0s} 192.168.50.245 22 <nil> <nil>}
	I0407 13:39:43.568931 1216250 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\skubernetes-upgrade-973925' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 kubernetes-upgrade-973925/g' /etc/hosts;
				else 
					echo '127.0.1.1 kubernetes-upgrade-973925' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0407 13:39:43.689902 1216250 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0407 13:39:43.689964 1216250 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/20602-1162386/.minikube CaCertPath:/home/jenkins/minikube-integration/20602-1162386/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/20602-1162386/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/20602-1162386/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/20602-1162386/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/20602-1162386/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/20602-1162386/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/20602-1162386/.minikube}
	I0407 13:39:43.690009 1216250 buildroot.go:174] setting up certificates
	I0407 13:39:43.690034 1216250 provision.go:84] configureAuth start
	I0407 13:39:43.690050 1216250 main.go:141] libmachine: (kubernetes-upgrade-973925) Calling .GetMachineName
	I0407 13:39:43.690555 1216250 main.go:141] libmachine: (kubernetes-upgrade-973925) Calling .GetIP
	I0407 13:39:43.694401 1216250 main.go:141] libmachine: (kubernetes-upgrade-973925) DBG | domain kubernetes-upgrade-973925 has defined MAC address 52:54:00:5e:eb:42 in network mk-kubernetes-upgrade-973925
	I0407 13:39:43.694901 1216250 main.go:141] libmachine: (kubernetes-upgrade-973925) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5e:eb:42", ip: ""} in network mk-kubernetes-upgrade-973925: {Iface:virbr2 ExpiryTime:2025-04-07 14:39:31 +0000 UTC Type:0 Mac:52:54:00:5e:eb:42 Iaid: IPaddr:192.168.50.245 Prefix:24 Hostname:kubernetes-upgrade-973925 Clientid:01:52:54:00:5e:eb:42}
	I0407 13:39:43.694937 1216250 main.go:141] libmachine: (kubernetes-upgrade-973925) DBG | domain kubernetes-upgrade-973925 has defined IP address 192.168.50.245 and MAC address 52:54:00:5e:eb:42 in network mk-kubernetes-upgrade-973925
	I0407 13:39:43.695230 1216250 main.go:141] libmachine: (kubernetes-upgrade-973925) Calling .GetSSHHostname
	I0407 13:39:43.698788 1216250 main.go:141] libmachine: (kubernetes-upgrade-973925) DBG | domain kubernetes-upgrade-973925 has defined MAC address 52:54:00:5e:eb:42 in network mk-kubernetes-upgrade-973925
	I0407 13:39:43.699343 1216250 main.go:141] libmachine: (kubernetes-upgrade-973925) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5e:eb:42", ip: ""} in network mk-kubernetes-upgrade-973925: {Iface:virbr2 ExpiryTime:2025-04-07 14:39:31 +0000 UTC Type:0 Mac:52:54:00:5e:eb:42 Iaid: IPaddr:192.168.50.245 Prefix:24 Hostname:kubernetes-upgrade-973925 Clientid:01:52:54:00:5e:eb:42}
	I0407 13:39:43.699369 1216250 main.go:141] libmachine: (kubernetes-upgrade-973925) DBG | domain kubernetes-upgrade-973925 has defined IP address 192.168.50.245 and MAC address 52:54:00:5e:eb:42 in network mk-kubernetes-upgrade-973925
	I0407 13:39:43.699580 1216250 provision.go:143] copyHostCerts
	I0407 13:39:43.699664 1216250 exec_runner.go:144] found /home/jenkins/minikube-integration/20602-1162386/.minikube/ca.pem, removing ...
	I0407 13:39:43.699693 1216250 exec_runner.go:203] rm: /home/jenkins/minikube-integration/20602-1162386/.minikube/ca.pem
	I0407 13:39:43.699774 1216250 exec_runner.go:151] cp: /home/jenkins/minikube-integration/20602-1162386/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/20602-1162386/.minikube/ca.pem (1078 bytes)
	I0407 13:39:43.699951 1216250 exec_runner.go:144] found /home/jenkins/minikube-integration/20602-1162386/.minikube/cert.pem, removing ...
	I0407 13:39:43.699966 1216250 exec_runner.go:203] rm: /home/jenkins/minikube-integration/20602-1162386/.minikube/cert.pem
	I0407 13:39:43.700011 1216250 exec_runner.go:151] cp: /home/jenkins/minikube-integration/20602-1162386/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/20602-1162386/.minikube/cert.pem (1123 bytes)
	I0407 13:39:43.700093 1216250 exec_runner.go:144] found /home/jenkins/minikube-integration/20602-1162386/.minikube/key.pem, removing ...
	I0407 13:39:43.700105 1216250 exec_runner.go:203] rm: /home/jenkins/minikube-integration/20602-1162386/.minikube/key.pem
	I0407 13:39:43.700143 1216250 exec_runner.go:151] cp: /home/jenkins/minikube-integration/20602-1162386/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/20602-1162386/.minikube/key.pem (1675 bytes)
	I0407 13:39:43.700218 1216250 provision.go:117] generating server cert: /home/jenkins/minikube-integration/20602-1162386/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/20602-1162386/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/20602-1162386/.minikube/certs/ca-key.pem org=jenkins.kubernetes-upgrade-973925 san=[127.0.0.1 192.168.50.245 kubernetes-upgrade-973925 localhost minikube]
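provision.go above generates a server certificate whose SANs cover the loopback address, the VM IP, and the hostnames listed in the log. A compact sketch of that step with Go's crypto/x509; it self-signs for brevity, whereas the real flow signs with the minikube CA key and writes server.pem/server-key.pem:

    package main

    import (
        "crypto/rand"
        "crypto/rsa"
        "crypto/x509"
        "crypto/x509/pkix"
        "encoding/pem"
        "math/big"
        "net"
        "os"
        "time"
    )

    func main() {
        // Key pair for the server certificate.
        key, err := rsa.GenerateKey(rand.Reader, 2048)
        if err != nil {
            panic(err)
        }

        // Template with the SANs seen in the log line above.
        tmpl := &x509.Certificate{
            SerialNumber: big.NewInt(1),
            Subject:      pkix.Name{Organization: []string{"jenkins.kubernetes-upgrade-973925"}},
            NotBefore:    time.Now(),
            NotAfter:     time.Now().Add(3 * 365 * 24 * time.Hour),
            KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
            ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
            DNSNames:     []string{"kubernetes-upgrade-973925", "localhost", "minikube"},
            IPAddresses:  []net.IP{net.ParseIP("127.0.0.1"), net.ParseIP("192.168.50.245")},
        }

        // Self-signed here; minikube instead signs with its CA (ca.pem / ca-key.pem).
        der, err := x509.CreateCertificate(rand.Reader, tmpl, tmpl, &key.PublicKey, key)
        if err != nil {
            panic(err)
        }
        pem.Encode(os.Stdout, &pem.Block{Type: "CERTIFICATE", Bytes: der})
    }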
	I0407 13:39:43.950845 1216250 provision.go:177] copyRemoteCerts
	I0407 13:39:43.950918 1216250 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0407 13:39:43.950974 1216250 main.go:141] libmachine: (kubernetes-upgrade-973925) Calling .GetSSHHostname
	I0407 13:39:43.954098 1216250 main.go:141] libmachine: (kubernetes-upgrade-973925) DBG | domain kubernetes-upgrade-973925 has defined MAC address 52:54:00:5e:eb:42 in network mk-kubernetes-upgrade-973925
	I0407 13:39:43.954453 1216250 main.go:141] libmachine: (kubernetes-upgrade-973925) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5e:eb:42", ip: ""} in network mk-kubernetes-upgrade-973925: {Iface:virbr2 ExpiryTime:2025-04-07 14:39:31 +0000 UTC Type:0 Mac:52:54:00:5e:eb:42 Iaid: IPaddr:192.168.50.245 Prefix:24 Hostname:kubernetes-upgrade-973925 Clientid:01:52:54:00:5e:eb:42}
	I0407 13:39:43.954484 1216250 main.go:141] libmachine: (kubernetes-upgrade-973925) DBG | domain kubernetes-upgrade-973925 has defined IP address 192.168.50.245 and MAC address 52:54:00:5e:eb:42 in network mk-kubernetes-upgrade-973925
	I0407 13:39:43.954663 1216250 main.go:141] libmachine: (kubernetes-upgrade-973925) Calling .GetSSHPort
	I0407 13:39:43.955046 1216250 main.go:141] libmachine: (kubernetes-upgrade-973925) Calling .GetSSHKeyPath
	I0407 13:39:43.955324 1216250 main.go:141] libmachine: (kubernetes-upgrade-973925) Calling .GetSSHUsername
	I0407 13:39:43.955531 1216250 sshutil.go:53] new ssh client: &{IP:192.168.50.245 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20602-1162386/.minikube/machines/kubernetes-upgrade-973925/id_rsa Username:docker}
	I0407 13:39:44.045293 1216250 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20602-1162386/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0407 13:39:44.079121 1216250 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20602-1162386/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0407 13:39:44.107952 1216250 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20602-1162386/.minikube/machines/server.pem --> /etc/docker/server.pem (1241 bytes)
	I0407 13:39:44.140597 1216250 provision.go:87] duration metric: took 450.544936ms to configureAuth
	I0407 13:39:44.140680 1216250 buildroot.go:189] setting minikube options for container-runtime
	I0407 13:39:44.140900 1216250 config.go:182] Loaded profile config "kubernetes-upgrade-973925": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.20.0
	I0407 13:39:44.141135 1216250 main.go:141] libmachine: (kubernetes-upgrade-973925) Calling .GetSSHHostname
	I0407 13:39:44.145987 1216250 main.go:141] libmachine: (kubernetes-upgrade-973925) DBG | domain kubernetes-upgrade-973925 has defined MAC address 52:54:00:5e:eb:42 in network mk-kubernetes-upgrade-973925
	I0407 13:39:44.146411 1216250 main.go:141] libmachine: (kubernetes-upgrade-973925) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5e:eb:42", ip: ""} in network mk-kubernetes-upgrade-973925: {Iface:virbr2 ExpiryTime:2025-04-07 14:39:31 +0000 UTC Type:0 Mac:52:54:00:5e:eb:42 Iaid: IPaddr:192.168.50.245 Prefix:24 Hostname:kubernetes-upgrade-973925 Clientid:01:52:54:00:5e:eb:42}
	I0407 13:39:44.146435 1216250 main.go:141] libmachine: (kubernetes-upgrade-973925) DBG | domain kubernetes-upgrade-973925 has defined IP address 192.168.50.245 and MAC address 52:54:00:5e:eb:42 in network mk-kubernetes-upgrade-973925
	I0407 13:39:44.146781 1216250 main.go:141] libmachine: (kubernetes-upgrade-973925) Calling .GetSSHPort
	I0407 13:39:44.147104 1216250 main.go:141] libmachine: (kubernetes-upgrade-973925) Calling .GetSSHKeyPath
	I0407 13:39:44.147327 1216250 main.go:141] libmachine: (kubernetes-upgrade-973925) Calling .GetSSHKeyPath
	I0407 13:39:44.147501 1216250 main.go:141] libmachine: (kubernetes-upgrade-973925) Calling .GetSSHUsername
	I0407 13:39:44.147765 1216250 main.go:141] libmachine: Using SSH client type: native
	I0407 13:39:44.148023 1216250 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x836360] 0x839060 <nil>  [] 0s} 192.168.50.245 22 <nil> <nil>}
	I0407 13:39:44.148043 1216250 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0407 13:39:44.405899 1216250 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0407 13:39:44.405936 1216250 main.go:141] libmachine: Checking connection to Docker...
	I0407 13:39:44.405950 1216250 main.go:141] libmachine: (kubernetes-upgrade-973925) Calling .GetURL
	I0407 13:39:44.407388 1216250 main.go:141] libmachine: (kubernetes-upgrade-973925) DBG | using libvirt version 6000000
	I0407 13:39:44.410819 1216250 main.go:141] libmachine: (kubernetes-upgrade-973925) DBG | domain kubernetes-upgrade-973925 has defined MAC address 52:54:00:5e:eb:42 in network mk-kubernetes-upgrade-973925
	I0407 13:39:44.411241 1216250 main.go:141] libmachine: (kubernetes-upgrade-973925) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5e:eb:42", ip: ""} in network mk-kubernetes-upgrade-973925: {Iface:virbr2 ExpiryTime:2025-04-07 14:39:31 +0000 UTC Type:0 Mac:52:54:00:5e:eb:42 Iaid: IPaddr:192.168.50.245 Prefix:24 Hostname:kubernetes-upgrade-973925 Clientid:01:52:54:00:5e:eb:42}
	I0407 13:39:44.411278 1216250 main.go:141] libmachine: (kubernetes-upgrade-973925) DBG | domain kubernetes-upgrade-973925 has defined IP address 192.168.50.245 and MAC address 52:54:00:5e:eb:42 in network mk-kubernetes-upgrade-973925
	I0407 13:39:44.411438 1216250 main.go:141] libmachine: Docker is up and running!
	I0407 13:39:44.411459 1216250 main.go:141] libmachine: Reticulating splines...
	I0407 13:39:44.411468 1216250 client.go:171] duration metric: took 29.318039015s to LocalClient.Create
	I0407 13:39:44.411496 1216250 start.go:167] duration metric: took 29.318123081s to libmachine.API.Create "kubernetes-upgrade-973925"
	I0407 13:39:44.411511 1216250 start.go:293] postStartSetup for "kubernetes-upgrade-973925" (driver="kvm2")
	I0407 13:39:44.411524 1216250 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0407 13:39:44.411548 1216250 main.go:141] libmachine: (kubernetes-upgrade-973925) Calling .DriverName
	I0407 13:39:44.411905 1216250 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0407 13:39:44.411955 1216250 main.go:141] libmachine: (kubernetes-upgrade-973925) Calling .GetSSHHostname
	I0407 13:39:44.416108 1216250 main.go:141] libmachine: (kubernetes-upgrade-973925) DBG | domain kubernetes-upgrade-973925 has defined MAC address 52:54:00:5e:eb:42 in network mk-kubernetes-upgrade-973925
	I0407 13:39:44.416733 1216250 main.go:141] libmachine: (kubernetes-upgrade-973925) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5e:eb:42", ip: ""} in network mk-kubernetes-upgrade-973925: {Iface:virbr2 ExpiryTime:2025-04-07 14:39:31 +0000 UTC Type:0 Mac:52:54:00:5e:eb:42 Iaid: IPaddr:192.168.50.245 Prefix:24 Hostname:kubernetes-upgrade-973925 Clientid:01:52:54:00:5e:eb:42}
	I0407 13:39:44.416771 1216250 main.go:141] libmachine: (kubernetes-upgrade-973925) DBG | domain kubernetes-upgrade-973925 has defined IP address 192.168.50.245 and MAC address 52:54:00:5e:eb:42 in network mk-kubernetes-upgrade-973925
	I0407 13:39:44.417116 1216250 main.go:141] libmachine: (kubernetes-upgrade-973925) Calling .GetSSHPort
	I0407 13:39:44.417368 1216250 main.go:141] libmachine: (kubernetes-upgrade-973925) Calling .GetSSHKeyPath
	I0407 13:39:44.417567 1216250 main.go:141] libmachine: (kubernetes-upgrade-973925) Calling .GetSSHUsername
	I0407 13:39:44.417774 1216250 sshutil.go:53] new ssh client: &{IP:192.168.50.245 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20602-1162386/.minikube/machines/kubernetes-upgrade-973925/id_rsa Username:docker}
	I0407 13:39:44.508203 1216250 ssh_runner.go:195] Run: cat /etc/os-release
	I0407 13:39:44.513063 1216250 info.go:137] Remote host: Buildroot 2023.02.9
	I0407 13:39:44.513097 1216250 filesync.go:126] Scanning /home/jenkins/minikube-integration/20602-1162386/.minikube/addons for local assets ...
	I0407 13:39:44.513170 1216250 filesync.go:126] Scanning /home/jenkins/minikube-integration/20602-1162386/.minikube/files for local assets ...
	I0407 13:39:44.513277 1216250 filesync.go:149] local asset: /home/jenkins/minikube-integration/20602-1162386/.minikube/files/etc/ssl/certs/11697162.pem -> 11697162.pem in /etc/ssl/certs
	I0407 13:39:44.513387 1216250 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0407 13:39:44.526064 1216250 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20602-1162386/.minikube/files/etc/ssl/certs/11697162.pem --> /etc/ssl/certs/11697162.pem (1708 bytes)
	I0407 13:39:44.559621 1216250 start.go:296] duration metric: took 148.090362ms for postStartSetup
	I0407 13:39:44.559683 1216250 main.go:141] libmachine: (kubernetes-upgrade-973925) Calling .GetConfigRaw
	I0407 13:39:44.560557 1216250 main.go:141] libmachine: (kubernetes-upgrade-973925) Calling .GetIP
	I0407 13:39:44.565137 1216250 main.go:141] libmachine: (kubernetes-upgrade-973925) DBG | domain kubernetes-upgrade-973925 has defined MAC address 52:54:00:5e:eb:42 in network mk-kubernetes-upgrade-973925
	I0407 13:39:44.565749 1216250 main.go:141] libmachine: (kubernetes-upgrade-973925) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5e:eb:42", ip: ""} in network mk-kubernetes-upgrade-973925: {Iface:virbr2 ExpiryTime:2025-04-07 14:39:31 +0000 UTC Type:0 Mac:52:54:00:5e:eb:42 Iaid: IPaddr:192.168.50.245 Prefix:24 Hostname:kubernetes-upgrade-973925 Clientid:01:52:54:00:5e:eb:42}
	I0407 13:39:44.565788 1216250 main.go:141] libmachine: (kubernetes-upgrade-973925) DBG | domain kubernetes-upgrade-973925 has defined IP address 192.168.50.245 and MAC address 52:54:00:5e:eb:42 in network mk-kubernetes-upgrade-973925
	I0407 13:39:44.566142 1216250 profile.go:143] Saving config to /home/jenkins/minikube-integration/20602-1162386/.minikube/profiles/kubernetes-upgrade-973925/config.json ...
	I0407 13:39:44.566387 1216250 start.go:128] duration metric: took 29.498276362s to createHost
	I0407 13:39:44.566415 1216250 main.go:141] libmachine: (kubernetes-upgrade-973925) Calling .GetSSHHostname
	I0407 13:39:44.570864 1216250 main.go:141] libmachine: (kubernetes-upgrade-973925) DBG | domain kubernetes-upgrade-973925 has defined MAC address 52:54:00:5e:eb:42 in network mk-kubernetes-upgrade-973925
	I0407 13:39:44.571406 1216250 main.go:141] libmachine: (kubernetes-upgrade-973925) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5e:eb:42", ip: ""} in network mk-kubernetes-upgrade-973925: {Iface:virbr2 ExpiryTime:2025-04-07 14:39:31 +0000 UTC Type:0 Mac:52:54:00:5e:eb:42 Iaid: IPaddr:192.168.50.245 Prefix:24 Hostname:kubernetes-upgrade-973925 Clientid:01:52:54:00:5e:eb:42}
	I0407 13:39:44.571464 1216250 main.go:141] libmachine: (kubernetes-upgrade-973925) DBG | domain kubernetes-upgrade-973925 has defined IP address 192.168.50.245 and MAC address 52:54:00:5e:eb:42 in network mk-kubernetes-upgrade-973925
	I0407 13:39:44.571825 1216250 main.go:141] libmachine: (kubernetes-upgrade-973925) Calling .GetSSHPort
	I0407 13:39:44.572268 1216250 main.go:141] libmachine: (kubernetes-upgrade-973925) Calling .GetSSHKeyPath
	I0407 13:39:44.572734 1216250 main.go:141] libmachine: (kubernetes-upgrade-973925) Calling .GetSSHKeyPath
	I0407 13:39:44.573085 1216250 main.go:141] libmachine: (kubernetes-upgrade-973925) Calling .GetSSHUsername
	I0407 13:39:44.573415 1216250 main.go:141] libmachine: Using SSH client type: native
	I0407 13:39:44.573642 1216250 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x836360] 0x839060 <nil>  [] 0s} 192.168.50.245 22 <nil> <nil>}
	I0407 13:39:44.573653 1216250 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0407 13:39:44.687754 1216250 main.go:141] libmachine: SSH cmd err, output: <nil>: 1744033184.665215584
	
	I0407 13:39:44.687789 1216250 fix.go:216] guest clock: 1744033184.665215584
	I0407 13:39:44.687801 1216250 fix.go:229] Guest: 2025-04-07 13:39:44.665215584 +0000 UTC Remote: 2025-04-07 13:39:44.566402935 +0000 UTC m=+29.659778628 (delta=98.812649ms)
	I0407 13:39:44.687830 1216250 fix.go:200] guest clock delta is within tolerance: 98.812649ms
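The delta above is just the difference between the guest's `date +%s.%N` output and the host-side timestamp, checked against a tolerance. A small worked version of that arithmetic in Go, using the exact values from the log (the one-second tolerance is an assumption; the real threshold lives in minikube's fix.go):

    package main

    import (
        "fmt"
        "time"
    )

    func main() {
        guest := time.Unix(1744033184, 665215584)                         // guest clock: 1744033184.665215584
        remote := time.Date(2025, 4, 7, 13, 39, 44, 566402935, time.UTC)  // host-side "Remote" timestamp
        delta := guest.Sub(remote)                                        // 98.812649ms, as logged
        const tolerance = time.Second                                     // assumed tolerance
        fmt.Printf("guest clock delta: %v (within tolerance: %v)\n",
            delta, delta > -tolerance && delta < tolerance)
    }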
	I0407 13:39:44.687836 1216250 start.go:83] releasing machines lock for "kubernetes-upgrade-973925", held for 29.619849249s
	I0407 13:39:44.687857 1216250 main.go:141] libmachine: (kubernetes-upgrade-973925) Calling .DriverName
	I0407 13:39:44.688382 1216250 main.go:141] libmachine: (kubernetes-upgrade-973925) Calling .GetIP
	I0407 13:39:44.693368 1216250 main.go:141] libmachine: (kubernetes-upgrade-973925) DBG | domain kubernetes-upgrade-973925 has defined MAC address 52:54:00:5e:eb:42 in network mk-kubernetes-upgrade-973925
	I0407 13:39:44.694172 1216250 main.go:141] libmachine: (kubernetes-upgrade-973925) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5e:eb:42", ip: ""} in network mk-kubernetes-upgrade-973925: {Iface:virbr2 ExpiryTime:2025-04-07 14:39:31 +0000 UTC Type:0 Mac:52:54:00:5e:eb:42 Iaid: IPaddr:192.168.50.245 Prefix:24 Hostname:kubernetes-upgrade-973925 Clientid:01:52:54:00:5e:eb:42}
	I0407 13:39:44.694250 1216250 main.go:141] libmachine: (kubernetes-upgrade-973925) DBG | domain kubernetes-upgrade-973925 has defined IP address 192.168.50.245 and MAC address 52:54:00:5e:eb:42 in network mk-kubernetes-upgrade-973925
	I0407 13:39:44.694593 1216250 main.go:141] libmachine: (kubernetes-upgrade-973925) Calling .DriverName
	I0407 13:39:44.695592 1216250 main.go:141] libmachine: (kubernetes-upgrade-973925) Calling .DriverName
	I0407 13:39:44.695983 1216250 main.go:141] libmachine: (kubernetes-upgrade-973925) Calling .DriverName
	I0407 13:39:44.696155 1216250 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0407 13:39:44.696290 1216250 main.go:141] libmachine: (kubernetes-upgrade-973925) Calling .GetSSHHostname
	I0407 13:39:44.696447 1216250 ssh_runner.go:195] Run: cat /version.json
	I0407 13:39:44.696508 1216250 main.go:141] libmachine: (kubernetes-upgrade-973925) Calling .GetSSHHostname
	I0407 13:39:44.701065 1216250 main.go:141] libmachine: (kubernetes-upgrade-973925) DBG | domain kubernetes-upgrade-973925 has defined MAC address 52:54:00:5e:eb:42 in network mk-kubernetes-upgrade-973925
	I0407 13:39:44.701112 1216250 main.go:141] libmachine: (kubernetes-upgrade-973925) DBG | domain kubernetes-upgrade-973925 has defined MAC address 52:54:00:5e:eb:42 in network mk-kubernetes-upgrade-973925
	I0407 13:39:44.701554 1216250 main.go:141] libmachine: (kubernetes-upgrade-973925) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5e:eb:42", ip: ""} in network mk-kubernetes-upgrade-973925: {Iface:virbr2 ExpiryTime:2025-04-07 14:39:31 +0000 UTC Type:0 Mac:52:54:00:5e:eb:42 Iaid: IPaddr:192.168.50.245 Prefix:24 Hostname:kubernetes-upgrade-973925 Clientid:01:52:54:00:5e:eb:42}
	I0407 13:39:44.701598 1216250 main.go:141] libmachine: (kubernetes-upgrade-973925) DBG | domain kubernetes-upgrade-973925 has defined IP address 192.168.50.245 and MAC address 52:54:00:5e:eb:42 in network mk-kubernetes-upgrade-973925
	I0407 13:39:44.701627 1216250 main.go:141] libmachine: (kubernetes-upgrade-973925) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5e:eb:42", ip: ""} in network mk-kubernetes-upgrade-973925: {Iface:virbr2 ExpiryTime:2025-04-07 14:39:31 +0000 UTC Type:0 Mac:52:54:00:5e:eb:42 Iaid: IPaddr:192.168.50.245 Prefix:24 Hostname:kubernetes-upgrade-973925 Clientid:01:52:54:00:5e:eb:42}
	I0407 13:39:44.701641 1216250 main.go:141] libmachine: (kubernetes-upgrade-973925) DBG | domain kubernetes-upgrade-973925 has defined IP address 192.168.50.245 and MAC address 52:54:00:5e:eb:42 in network mk-kubernetes-upgrade-973925
	I0407 13:39:44.701936 1216250 main.go:141] libmachine: (kubernetes-upgrade-973925) Calling .GetSSHPort
	I0407 13:39:44.702075 1216250 main.go:141] libmachine: (kubernetes-upgrade-973925) Calling .GetSSHPort
	I0407 13:39:44.702189 1216250 main.go:141] libmachine: (kubernetes-upgrade-973925) Calling .GetSSHKeyPath
	I0407 13:39:44.702286 1216250 main.go:141] libmachine: (kubernetes-upgrade-973925) Calling .GetSSHKeyPath
	I0407 13:39:44.702375 1216250 main.go:141] libmachine: (kubernetes-upgrade-973925) Calling .GetSSHUsername
	I0407 13:39:44.702453 1216250 main.go:141] libmachine: (kubernetes-upgrade-973925) Calling .GetSSHUsername
	I0407 13:39:44.702569 1216250 sshutil.go:53] new ssh client: &{IP:192.168.50.245 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20602-1162386/.minikube/machines/kubernetes-upgrade-973925/id_rsa Username:docker}
	I0407 13:39:44.702652 1216250 sshutil.go:53] new ssh client: &{IP:192.168.50.245 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20602-1162386/.minikube/machines/kubernetes-upgrade-973925/id_rsa Username:docker}
	I0407 13:39:44.817349 1216250 ssh_runner.go:195] Run: systemctl --version
	I0407 13:39:44.825479 1216250 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0407 13:39:44.997949 1216250 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0407 13:39:45.004486 1216250 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0407 13:39:45.004575 1216250 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0407 13:39:45.023419 1216250 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0407 13:39:45.023448 1216250 start.go:495] detecting cgroup driver to use...
	I0407 13:39:45.023537 1216250 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0407 13:39:45.045131 1216250 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0407 13:39:45.062029 1216250 docker.go:217] disabling cri-docker service (if available) ...
	I0407 13:39:45.062093 1216250 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0407 13:39:45.078434 1216250 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0407 13:39:45.096083 1216250 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0407 13:39:45.223489 1216250 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0407 13:39:45.406246 1216250 docker.go:233] disabling docker service ...
	I0407 13:39:45.406352 1216250 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0407 13:39:45.423059 1216250 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0407 13:39:45.439626 1216250 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0407 13:39:45.581086 1216250 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0407 13:39:45.721029 1216250 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0407 13:39:45.742723 1216250 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0407 13:39:45.771480 1216250 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.2" pause image...
	I0407 13:39:45.771577 1216250 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.2"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0407 13:39:45.787203 1216250 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0407 13:39:45.787307 1216250 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0407 13:39:45.803529 1216250 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0407 13:39:45.819728 1216250 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
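The sed commands above rewrite two keys in /etc/crio/crio.conf.d/02-crio.conf (pause_image and cgroup_manager) and then re-add conmon_cgroup. A hedged local-file sketch of the same key rewrite in Go; it operates on a locally staged copy and omits the conmon_cgroup step, standing in for the remote sed calls rather than reproducing minikube's ssh_runner:

    package main

    import (
        "os"
        "regexp"
    )

    // setKey replaces an existing `key = ...` line, mirroring the sed expressions above.
    func setKey(conf []byte, key, value string) []byte {
        re := regexp.MustCompile(`(?m)^.*` + key + ` = .*$`)
        return re.ReplaceAll(conf, []byte(key+` = "`+value+`"`))
    }

    func main() {
        path := "02-crio.conf" // local copy; the log edits /etc/crio/crio.conf.d/02-crio.conf over SSH
        conf, err := os.ReadFile(path)
        if err != nil {
            panic(err)
        }
        conf = setKey(conf, "pause_image", "registry.k8s.io/pause:3.2")
        conf = setKey(conf, "cgroup_manager", "cgroupfs")
        if err := os.WriteFile(path, conf, 0o644); err != nil {
            panic(err)
        }
    }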
	I0407 13:39:45.836055 1216250 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0407 13:39:45.851146 1216250 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0407 13:39:45.866360 1216250 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0407 13:39:45.866446 1216250 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0407 13:39:45.883933 1216250 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0407 13:39:45.897953 1216250 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0407 13:39:46.044248 1216250 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0407 13:39:46.257132 1216250 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I0407 13:39:46.257211 1216250 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0407 13:39:46.262969 1216250 start.go:563] Will wait 60s for crictl version
	I0407 13:39:46.263046 1216250 ssh_runner.go:195] Run: which crictl
	I0407 13:39:46.268162 1216250 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0407 13:39:46.314029 1216250 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0407 13:39:46.314156 1216250 ssh_runner.go:195] Run: crio --version
	I0407 13:39:46.347090 1216250 ssh_runner.go:195] Run: crio --version
	I0407 13:39:46.439521 1216250 out.go:177] * Preparing Kubernetes v1.20.0 on CRI-O 1.29.1 ...
	I0407 13:39:46.510880 1216250 main.go:141] libmachine: (kubernetes-upgrade-973925) Calling .GetIP
	I0407 13:39:46.516340 1216250 main.go:141] libmachine: (kubernetes-upgrade-973925) DBG | domain kubernetes-upgrade-973925 has defined MAC address 52:54:00:5e:eb:42 in network mk-kubernetes-upgrade-973925
	I0407 13:39:46.517054 1216250 main.go:141] libmachine: (kubernetes-upgrade-973925) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5e:eb:42", ip: ""} in network mk-kubernetes-upgrade-973925: {Iface:virbr2 ExpiryTime:2025-04-07 14:39:31 +0000 UTC Type:0 Mac:52:54:00:5e:eb:42 Iaid: IPaddr:192.168.50.245 Prefix:24 Hostname:kubernetes-upgrade-973925 Clientid:01:52:54:00:5e:eb:42}
	I0407 13:39:46.517106 1216250 main.go:141] libmachine: (kubernetes-upgrade-973925) DBG | domain kubernetes-upgrade-973925 has defined IP address 192.168.50.245 and MAC address 52:54:00:5e:eb:42 in network mk-kubernetes-upgrade-973925
	I0407 13:39:46.517437 1216250 ssh_runner.go:195] Run: grep 192.168.50.1	host.minikube.internal$ /etc/hosts
	I0407 13:39:46.523395 1216250 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.50.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0407 13:39:46.540803 1216250 kubeadm.go:883] updating cluster {Name:kubernetes-upgrade-973925 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.35.0-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.46-1743675393-20591@sha256:8de8167e280414c9d16e4c3da59bc85bc7c9cc24228af995863fc7bcabfcf727 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:kubernetes-upgrade-973925 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.50.245 Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0407 13:39:46.540984 1216250 preload.go:131] Checking if preload exists for k8s version v1.20.0 and runtime crio
	I0407 13:39:46.541072 1216250 ssh_runner.go:195] Run: sudo crictl images --output json
	I0407 13:39:46.584391 1216250 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.20.0". assuming images are not preloaded.
	I0407 13:39:46.584499 1216250 ssh_runner.go:195] Run: which lz4
	I0407 13:39:46.589644 1216250 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I0407 13:39:46.594777 1216250 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0407 13:39:46.594818 1216250 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20602-1162386/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (473237281 bytes)
	I0407 13:39:48.535259 1216250 crio.go:462] duration metric: took 1.945660112s to copy over tarball
	I0407 13:39:48.535408 1216250 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I0407 13:39:51.546269 1216250 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (3.010810233s)
	I0407 13:39:51.546312 1216250 crio.go:469] duration metric: took 3.010998231s to extract the tarball
	I0407 13:39:51.546323 1216250 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I0407 13:39:51.595574 1216250 ssh_runner.go:195] Run: sudo crictl images --output json
	I0407 13:39:51.653646 1216250 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.20.0". assuming images are not preloaded.
	I0407 13:39:51.653686 1216250 cache_images.go:88] LoadCachedImages start: [registry.k8s.io/kube-apiserver:v1.20.0 registry.k8s.io/kube-controller-manager:v1.20.0 registry.k8s.io/kube-scheduler:v1.20.0 registry.k8s.io/kube-proxy:v1.20.0 registry.k8s.io/pause:3.2 registry.k8s.io/etcd:3.4.13-0 registry.k8s.io/coredns:1.7.0 gcr.io/k8s-minikube/storage-provisioner:v5]
	I0407 13:39:51.653761 1216250 image.go:135] retrieving image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0407 13:39:51.653908 1216250 image.go:135] retrieving image: registry.k8s.io/kube-proxy:v1.20.0
	I0407 13:39:51.653941 1216250 image.go:135] retrieving image: registry.k8s.io/etcd:3.4.13-0
	I0407 13:39:51.654055 1216250 image.go:135] retrieving image: registry.k8s.io/pause:3.2
	I0407 13:39:51.654110 1216250 image.go:135] retrieving image: registry.k8s.io/kube-scheduler:v1.20.0
	I0407 13:39:51.654065 1216250 image.go:135] retrieving image: registry.k8s.io/coredns:1.7.0
	I0407 13:39:51.654078 1216250 image.go:135] retrieving image: registry.k8s.io/kube-controller-manager:v1.20.0
	I0407 13:39:51.654089 1216250 image.go:135] retrieving image: registry.k8s.io/kube-apiserver:v1.20.0
	I0407 13:39:51.655733 1216250 image.go:178] daemon lookup for registry.k8s.io/kube-apiserver:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-apiserver:v1.20.0
	I0407 13:39:51.655755 1216250 image.go:178] daemon lookup for registry.k8s.io/kube-proxy:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-proxy:v1.20.0
	I0407 13:39:51.655771 1216250 image.go:178] daemon lookup for registry.k8s.io/coredns:1.7.0: Error response from daemon: No such image: registry.k8s.io/coredns:1.7.0
	I0407 13:39:51.655782 1216250 image.go:178] daemon lookup for gcr.io/k8s-minikube/storage-provisioner:v5: Error response from daemon: No such image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0407 13:39:51.655732 1216250 image.go:178] daemon lookup for registry.k8s.io/pause:3.2: Error response from daemon: No such image: registry.k8s.io/pause:3.2
	I0407 13:39:51.655736 1216250 image.go:178] daemon lookup for registry.k8s.io/kube-controller-manager:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-controller-manager:v1.20.0
	I0407 13:39:51.655854 1216250 image.go:178] daemon lookup for registry.k8s.io/kube-scheduler:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-scheduler:v1.20.0
	I0407 13:39:51.656049 1216250 image.go:178] daemon lookup for registry.k8s.io/etcd:3.4.13-0: Error response from daemon: No such image: registry.k8s.io/etcd:3.4.13-0
	I0407 13:39:51.795078 1216250 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-apiserver:v1.20.0
	I0407 13:39:51.804977 1216250 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/etcd:3.4.13-0
	I0407 13:39:51.811876 1216250 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-controller-manager:v1.20.0
	I0407 13:39:51.846819 1216250 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/coredns:1.7.0
	I0407 13:39:51.855370 1216250 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/pause:3.2
	I0407 13:39:51.867858 1216250 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-scheduler:v1.20.0
	I0407 13:39:51.872709 1216250 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-proxy:v1.20.0
	I0407 13:39:51.944755 1216250 cache_images.go:116] "registry.k8s.io/kube-apiserver:v1.20.0" needs transfer: "registry.k8s.io/kube-apiserver:v1.20.0" does not exist at hash "ca9843d3b545457f24b012d6d579ba85f132f2406aa171ad84d53caa55e5de99" in container runtime
	I0407 13:39:51.944815 1216250 cri.go:218] Removing image: registry.k8s.io/kube-apiserver:v1.20.0
	I0407 13:39:51.944872 1216250 ssh_runner.go:195] Run: which crictl
	I0407 13:39:51.944886 1216250 cache_images.go:116] "registry.k8s.io/etcd:3.4.13-0" needs transfer: "registry.k8s.io/etcd:3.4.13-0" does not exist at hash "0369cf4303ffdb467dc219990960a9baa8512a54b0ad9283eaf55bd6c0adb934" in container runtime
	I0407 13:39:51.944934 1216250 cri.go:218] Removing image: registry.k8s.io/etcd:3.4.13-0
	I0407 13:39:51.944985 1216250 ssh_runner.go:195] Run: which crictl
	I0407 13:39:52.002742 1216250 cache_images.go:116] "registry.k8s.io/kube-controller-manager:v1.20.0" needs transfer: "registry.k8s.io/kube-controller-manager:v1.20.0" does not exist at hash "b9fa1895dcaa6d3dd241d6d9340e939ca30fc0946464ec9f205a8cbe738a8080" in container runtime
	I0407 13:39:52.002790 1216250 cri.go:218] Removing image: registry.k8s.io/kube-controller-manager:v1.20.0
	I0407 13:39:52.002835 1216250 ssh_runner.go:195] Run: which crictl
	I0407 13:39:52.042361 1216250 cache_images.go:116] "registry.k8s.io/pause:3.2" needs transfer: "registry.k8s.io/pause:3.2" does not exist at hash "80d28bedfe5dec59da9ebf8e6260224ac9008ab5c11dbbe16ee3ba3e4439ac2c" in container runtime
	I0407 13:39:52.042487 1216250 cache_images.go:116] "registry.k8s.io/kube-proxy:v1.20.0" needs transfer: "registry.k8s.io/kube-proxy:v1.20.0" does not exist at hash "10cc881966cfd9287656c2fce1f144625602653d1e8b011487a7a71feb100bdc" in container runtime
	I0407 13:39:52.042619 1216250 cri.go:218] Removing image: registry.k8s.io/pause:3.2
	I0407 13:39:52.042655 1216250 cri.go:218] Removing image: registry.k8s.io/kube-proxy:v1.20.0
	I0407 13:39:52.042705 1216250 ssh_runner.go:195] Run: which crictl
	I0407 13:39:52.042423 1216250 cache_images.go:116] "registry.k8s.io/kube-scheduler:v1.20.0" needs transfer: "registry.k8s.io/kube-scheduler:v1.20.0" does not exist at hash "3138b6e3d471224fd516f758f3b53309219bcb6824e07686b3cd60d78012c899" in container runtime
	I0407 13:39:52.042765 1216250 cri.go:218] Removing image: registry.k8s.io/kube-scheduler:v1.20.0
	I0407 13:39:52.042794 1216250 ssh_runner.go:195] Run: which crictl
	I0407 13:39:52.042714 1216250 ssh_runner.go:195] Run: which crictl
	I0407 13:39:52.042735 1216250 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.20.0
	I0407 13:39:52.042478 1216250 cache_images.go:116] "registry.k8s.io/coredns:1.7.0" needs transfer: "registry.k8s.io/coredns:1.7.0" does not exist at hash "bfe3a36ebd2528b454be6aebece806db5b40407b833e2af9617bf39afaff8c16" in container runtime
	I0407 13:39:52.042882 1216250 cri.go:218] Removing image: registry.k8s.io/coredns:1.7.0
	I0407 13:39:52.042913 1216250 ssh_runner.go:195] Run: which crictl
	I0407 13:39:52.042947 1216250 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.4.13-0
	I0407 13:39:52.042854 1216250 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.20.0
	I0407 13:39:52.049242 1216250 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/pause:3.2
	I0407 13:39:52.158369 1216250 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.20.0
	I0407 13:39:52.158390 1216250 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.20.0
	I0407 13:39:52.158442 1216250 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.20.0
	I0407 13:39:52.158472 1216250 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns:1.7.0
	I0407 13:39:52.158517 1216250 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.20.0
	I0407 13:39:52.158551 1216250 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.4.13-0
	I0407 13:39:52.158585 1216250 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/pause:3.2
	I0407 13:39:52.338514 1216250 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.20.0
	I0407 13:39:52.338611 1216250 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.20.0
	I0407 13:39:52.338612 1216250 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.4.13-0
	I0407 13:39:52.338524 1216250 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.20.0
	I0407 13:39:52.338734 1216250 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns:1.7.0
	I0407 13:39:52.338685 1216250 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.20.0
	I0407 13:39:52.338806 1216250 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/pause:3.2
	I0407 13:39:52.533056 1216250 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/20602-1162386/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.20.0
	I0407 13:39:52.533073 1216250 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.20.0
	I0407 13:39:52.533112 1216250 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/20602-1162386/.minikube/cache/images/amd64/registry.k8s.io/pause_3.2
	I0407 13:39:52.533132 1216250 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/20602-1162386/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.20.0
	I0407 13:39:52.533172 1216250 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/20602-1162386/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.4.13-0
	I0407 13:39:52.533244 1216250 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.20.0
	I0407 13:39:52.533352 1216250 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns:1.7.0
	I0407 13:39:52.602383 1216250 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/20602-1162386/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.20.0
	I0407 13:39:52.604557 1216250 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/20602-1162386/.minikube/cache/images/amd64/registry.k8s.io/coredns_1.7.0
	I0407 13:39:52.613248 1216250 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/20602-1162386/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.20.0
	I0407 13:39:53.274904 1216250 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} gcr.io/k8s-minikube/storage-provisioner:v5
	I0407 13:39:53.421541 1216250 cache_images.go:92] duration metric: took 1.767833232s to LoadCachedImages
	W0407 13:39:53.421684 1216250 out.go:270] X Unable to load cached images: LoadCachedImages: stat /home/jenkins/minikube-integration/20602-1162386/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.20.0: no such file or directory
	X Unable to load cached images: LoadCachedImages: stat /home/jenkins/minikube-integration/20602-1162386/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.20.0: no such file or directory
	I0407 13:39:53.421726 1216250 kubeadm.go:934] updating node { 192.168.50.245 8443 v1.20.0 crio true true} ...
	I0407 13:39:53.421848 1216250 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.20.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime=remote --container-runtime-endpoint=unix:///var/run/crio/crio.sock --hostname-override=kubernetes-upgrade-973925 --kubeconfig=/etc/kubernetes/kubelet.conf --network-plugin=cni --node-ip=192.168.50.245
	
	[Install]
	 config:
	{KubernetesVersion:v1.20.0 ClusterName:kubernetes-upgrade-973925 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0407 13:39:53.421942 1216250 ssh_runner.go:195] Run: crio config
	I0407 13:39:53.477014 1216250 cni.go:84] Creating CNI manager for ""
	I0407 13:39:53.477047 1216250 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0407 13:39:53.477059 1216250 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0407 13:39:53.477079 1216250 kubeadm.go:189] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.50.245 APIServerPort:8443 KubernetesVersion:v1.20.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:kubernetes-upgrade-973925 NodeName:kubernetes-upgrade-973925 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.50.245"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.50.245 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:false}
	I0407 13:39:53.477219 1216250 kubeadm.go:195] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta2
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.50.245
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: /var/run/crio/crio.sock
	  name: "kubernetes-upgrade-973925"
	  kubeletExtraArgs:
	    node-ip: 192.168.50.245
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta2
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.50.245"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	dns:
	  type: CoreDNS
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.20.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0407 13:39:53.477295 1216250 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.20.0
	I0407 13:39:53.489556 1216250 binaries.go:44] Found k8s binaries, skipping transfer
	I0407 13:39:53.489648 1216250 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0407 13:39:53.501253 1216250 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (433 bytes)
	I0407 13:39:53.522331 1216250 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0407 13:39:53.542694 1216250 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2126 bytes)
	I0407 13:39:53.564632 1216250 ssh_runner.go:195] Run: grep 192.168.50.245	control-plane.minikube.internal$ /etc/hosts
	I0407 13:39:53.569362 1216250 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.50.245	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0407 13:39:53.584839 1216250 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0407 13:39:53.727726 1216250 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0407 13:39:53.746795 1216250 certs.go:68] Setting up /home/jenkins/minikube-integration/20602-1162386/.minikube/profiles/kubernetes-upgrade-973925 for IP: 192.168.50.245
	I0407 13:39:53.746828 1216250 certs.go:194] generating shared ca certs ...
	I0407 13:39:53.746849 1216250 certs.go:226] acquiring lock for ca certs: {Name:mk8e89191fca7f2111bdd08c345368f593b0d5d0 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0407 13:39:53.747068 1216250 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/20602-1162386/.minikube/ca.key
	I0407 13:39:53.747137 1216250 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/20602-1162386/.minikube/proxy-client-ca.key
	I0407 13:39:53.747152 1216250 certs.go:256] generating profile certs ...
	I0407 13:39:53.747225 1216250 certs.go:363] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/20602-1162386/.minikube/profiles/kubernetes-upgrade-973925/client.key
	I0407 13:39:53.747255 1216250 crypto.go:68] Generating cert /home/jenkins/minikube-integration/20602-1162386/.minikube/profiles/kubernetes-upgrade-973925/client.crt with IP's: []
	I0407 13:39:53.880797 1216250 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/20602-1162386/.minikube/profiles/kubernetes-upgrade-973925/client.crt ...
	I0407 13:39:53.880836 1216250 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20602-1162386/.minikube/profiles/kubernetes-upgrade-973925/client.crt: {Name:mk7bc69e3d63252a792e36bf8da6d7931d8d2069 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0407 13:39:53.881042 1216250 crypto.go:164] Writing key to /home/jenkins/minikube-integration/20602-1162386/.minikube/profiles/kubernetes-upgrade-973925/client.key ...
	I0407 13:39:53.881059 1216250 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20602-1162386/.minikube/profiles/kubernetes-upgrade-973925/client.key: {Name:mk5fc2fd88ce67fbefde1b963b96d3fa2a1ddc97 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0407 13:39:53.881138 1216250 certs.go:363] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/20602-1162386/.minikube/profiles/kubernetes-upgrade-973925/apiserver.key.dcf03a85
	I0407 13:39:53.881154 1216250 crypto.go:68] Generating cert /home/jenkins/minikube-integration/20602-1162386/.minikube/profiles/kubernetes-upgrade-973925/apiserver.crt.dcf03a85 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.50.245]
	I0407 13:39:54.484536 1216250 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/20602-1162386/.minikube/profiles/kubernetes-upgrade-973925/apiserver.crt.dcf03a85 ...
	I0407 13:39:54.484586 1216250 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20602-1162386/.minikube/profiles/kubernetes-upgrade-973925/apiserver.crt.dcf03a85: {Name:mkcec67a1af2f348e76547512b06f40a85de1c66 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0407 13:39:54.484834 1216250 crypto.go:164] Writing key to /home/jenkins/minikube-integration/20602-1162386/.minikube/profiles/kubernetes-upgrade-973925/apiserver.key.dcf03a85 ...
	I0407 13:39:54.484858 1216250 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20602-1162386/.minikube/profiles/kubernetes-upgrade-973925/apiserver.key.dcf03a85: {Name:mkf78a2a360ae1084a23be456b6340524458a160 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0407 13:39:54.484978 1216250 certs.go:381] copying /home/jenkins/minikube-integration/20602-1162386/.minikube/profiles/kubernetes-upgrade-973925/apiserver.crt.dcf03a85 -> /home/jenkins/minikube-integration/20602-1162386/.minikube/profiles/kubernetes-upgrade-973925/apiserver.crt
	I0407 13:39:54.485104 1216250 certs.go:385] copying /home/jenkins/minikube-integration/20602-1162386/.minikube/profiles/kubernetes-upgrade-973925/apiserver.key.dcf03a85 -> /home/jenkins/minikube-integration/20602-1162386/.minikube/profiles/kubernetes-upgrade-973925/apiserver.key
	I0407 13:39:54.485198 1216250 certs.go:363] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/20602-1162386/.minikube/profiles/kubernetes-upgrade-973925/proxy-client.key
	I0407 13:39:54.485222 1216250 crypto.go:68] Generating cert /home/jenkins/minikube-integration/20602-1162386/.minikube/profiles/kubernetes-upgrade-973925/proxy-client.crt with IP's: []
	I0407 13:39:55.123494 1216250 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/20602-1162386/.minikube/profiles/kubernetes-upgrade-973925/proxy-client.crt ...
	I0407 13:39:55.123540 1216250 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20602-1162386/.minikube/profiles/kubernetes-upgrade-973925/proxy-client.crt: {Name:mk42673e9d4f3afbaa5807d1fee68852d465c54a Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0407 13:39:55.123771 1216250 crypto.go:164] Writing key to /home/jenkins/minikube-integration/20602-1162386/.minikube/profiles/kubernetes-upgrade-973925/proxy-client.key ...
	I0407 13:39:55.123789 1216250 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20602-1162386/.minikube/profiles/kubernetes-upgrade-973925/proxy-client.key: {Name:mk845e1f427b7a13d2cb058526496bb984ec38c6 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0407 13:39:55.124007 1216250 certs.go:484] found cert: /home/jenkins/minikube-integration/20602-1162386/.minikube/certs/1169716.pem (1338 bytes)
	W0407 13:39:55.124050 1216250 certs.go:480] ignoring /home/jenkins/minikube-integration/20602-1162386/.minikube/certs/1169716_empty.pem, impossibly tiny 0 bytes
	I0407 13:39:55.124062 1216250 certs.go:484] found cert: /home/jenkins/minikube-integration/20602-1162386/.minikube/certs/ca-key.pem (1675 bytes)
	I0407 13:39:55.124083 1216250 certs.go:484] found cert: /home/jenkins/minikube-integration/20602-1162386/.minikube/certs/ca.pem (1078 bytes)
	I0407 13:39:55.124107 1216250 certs.go:484] found cert: /home/jenkins/minikube-integration/20602-1162386/.minikube/certs/cert.pem (1123 bytes)
	I0407 13:39:55.124127 1216250 certs.go:484] found cert: /home/jenkins/minikube-integration/20602-1162386/.minikube/certs/key.pem (1675 bytes)
	I0407 13:39:55.124161 1216250 certs.go:484] found cert: /home/jenkins/minikube-integration/20602-1162386/.minikube/files/etc/ssl/certs/11697162.pem (1708 bytes)
	I0407 13:39:55.124728 1216250 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20602-1162386/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0407 13:39:55.154234 1216250 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20602-1162386/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0407 13:39:55.186014 1216250 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20602-1162386/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0407 13:39:55.216174 1216250 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20602-1162386/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0407 13:39:55.246138 1216250 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20602-1162386/.minikube/profiles/kubernetes-upgrade-973925/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1436 bytes)
	I0407 13:39:55.275880 1216250 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20602-1162386/.minikube/profiles/kubernetes-upgrade-973925/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0407 13:39:55.307828 1216250 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20602-1162386/.minikube/profiles/kubernetes-upgrade-973925/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0407 13:39:55.339437 1216250 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20602-1162386/.minikube/profiles/kubernetes-upgrade-973925/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0407 13:39:55.369163 1216250 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20602-1162386/.minikube/files/etc/ssl/certs/11697162.pem --> /usr/share/ca-certificates/11697162.pem (1708 bytes)
	I0407 13:39:55.400761 1216250 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20602-1162386/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0407 13:39:55.429661 1216250 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20602-1162386/.minikube/certs/1169716.pem --> /usr/share/ca-certificates/1169716.pem (1338 bytes)
	I0407 13:39:55.458846 1216250 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0407 13:39:55.479757 1216250 ssh_runner.go:195] Run: openssl version
	I0407 13:39:55.486211 1216250 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0407 13:39:55.500081 1216250 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0407 13:39:55.506320 1216250 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Apr  7 12:14 /usr/share/ca-certificates/minikubeCA.pem
	I0407 13:39:55.506397 1216250 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0407 13:39:55.513551 1216250 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0407 13:39:55.527789 1216250 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/1169716.pem && ln -fs /usr/share/ca-certificates/1169716.pem /etc/ssl/certs/1169716.pem"
	I0407 13:39:55.544808 1216250 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/1169716.pem
	I0407 13:39:55.555940 1216250 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Apr  7 12:22 /usr/share/ca-certificates/1169716.pem
	I0407 13:39:55.556037 1216250 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/1169716.pem
	I0407 13:39:55.563885 1216250 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/1169716.pem /etc/ssl/certs/51391683.0"
	I0407 13:39:55.578241 1216250 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/11697162.pem && ln -fs /usr/share/ca-certificates/11697162.pem /etc/ssl/certs/11697162.pem"
	I0407 13:39:55.597629 1216250 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/11697162.pem
	I0407 13:39:55.603915 1216250 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Apr  7 12:22 /usr/share/ca-certificates/11697162.pem
	I0407 13:39:55.603987 1216250 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/11697162.pem
	I0407 13:39:55.615203 1216250 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/11697162.pem /etc/ssl/certs/3ec20f2e.0"
	I0407 13:39:55.636670 1216250 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0407 13:39:55.643556 1216250 certs.go:399] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I0407 13:39:55.643772 1216250 kubeadm.go:392] StartCluster: {Name:kubernetes-upgrade-973925 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.35.0-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.46-1743675393-20591@sha256:8de8167e280414c9d16e4c3da59bc85bc7c9cc24228af995863fc7bcabfcf727 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:kubernetes-upgrade-973925 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.50.245 Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0407 13:39:55.643904 1216250 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0407 13:39:55.643986 1216250 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0407 13:39:55.703154 1216250 cri.go:89] found id: ""
	I0407 13:39:55.703262 1216250 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0407 13:39:55.718420 1216250 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0407 13:39:55.732926 1216250 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0407 13:39:55.744582 1216250 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0407 13:39:55.744610 1216250 kubeadm.go:157] found existing configuration files:
	
	I0407 13:39:55.744661 1216250 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0407 13:39:55.756907 1216250 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0407 13:39:55.757079 1216250 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0407 13:39:55.769188 1216250 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0407 13:39:55.781918 1216250 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0407 13:39:55.781998 1216250 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0407 13:39:55.795381 1216250 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0407 13:39:55.808769 1216250 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0407 13:39:55.808868 1216250 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0407 13:39:55.822117 1216250 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0407 13:39:55.834812 1216250 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0407 13:39:55.834885 1216250 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0407 13:39:55.846708 1216250 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0407 13:39:56.234245 1216250 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0407 13:41:54.711801 1216250 kubeadm.go:310] error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	I0407 13:41:54.712090 1216250 kubeadm.go:310] To see the stack trace of this error execute with --v=5 or higher
	I0407 13:41:54.713185 1216250 kubeadm.go:310] [init] Using Kubernetes version: v1.20.0
	I0407 13:41:54.713314 1216250 kubeadm.go:310] [preflight] Running pre-flight checks
	I0407 13:41:54.713546 1216250 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0407 13:41:54.713946 1216250 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0407 13:41:54.714209 1216250 kubeadm.go:310] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0407 13:41:54.714320 1216250 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0407 13:41:54.716511 1216250 out.go:235]   - Generating certificates and keys ...
	I0407 13:41:54.716701 1216250 kubeadm.go:310] [certs] Using existing ca certificate authority
	I0407 13:41:54.716840 1216250 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I0407 13:41:54.717025 1216250 kubeadm.go:310] [certs] Generating "apiserver-kubelet-client" certificate and key
	I0407 13:41:54.717173 1216250 kubeadm.go:310] [certs] Generating "front-proxy-ca" certificate and key
	I0407 13:41:54.717312 1216250 kubeadm.go:310] [certs] Generating "front-proxy-client" certificate and key
	I0407 13:41:54.717386 1216250 kubeadm.go:310] [certs] Generating "etcd/ca" certificate and key
	I0407 13:41:54.717482 1216250 kubeadm.go:310] [certs] Generating "etcd/server" certificate and key
	I0407 13:41:54.717652 1216250 kubeadm.go:310] [certs] etcd/server serving cert is signed for DNS names [kubernetes-upgrade-973925 localhost] and IPs [192.168.50.245 127.0.0.1 ::1]
	I0407 13:41:54.717746 1216250 kubeadm.go:310] [certs] Generating "etcd/peer" certificate and key
	I0407 13:41:54.717856 1216250 kubeadm.go:310] [certs] etcd/peer serving cert is signed for DNS names [kubernetes-upgrade-973925 localhost] and IPs [192.168.50.245 127.0.0.1 ::1]
	I0407 13:41:54.717938 1216250 kubeadm.go:310] [certs] Generating "etcd/healthcheck-client" certificate and key
	I0407 13:41:54.718020 1216250 kubeadm.go:310] [certs] Generating "apiserver-etcd-client" certificate and key
	I0407 13:41:54.718059 1216250 kubeadm.go:310] [certs] Generating "sa" key and public key
	I0407 13:41:54.718130 1216250 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0407 13:41:54.718187 1216250 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0407 13:41:54.718253 1216250 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0407 13:41:54.718330 1216250 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0407 13:41:54.718402 1216250 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0407 13:41:54.718512 1216250 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0407 13:41:54.718589 1216250 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0407 13:41:54.718622 1216250 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I0407 13:41:54.718676 1216250 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0407 13:41:54.720494 1216250 out.go:235]   - Booting up control plane ...
	I0407 13:41:54.720634 1216250 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0407 13:41:54.720744 1216250 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0407 13:41:54.720835 1216250 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0407 13:41:54.720962 1216250 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0407 13:41:54.721131 1216250 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I0407 13:41:54.721217 1216250 kubeadm.go:310] [kubelet-check] Initial timeout of 40s passed.
	I0407 13:41:54.721297 1216250 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0407 13:41:54.721484 1216250 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0407 13:41:54.721555 1216250 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0407 13:41:54.721762 1216250 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0407 13:41:54.721822 1216250 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0407 13:41:54.722001 1216250 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0407 13:41:54.722111 1216250 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0407 13:41:54.722306 1216250 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0407 13:41:54.722392 1216250 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0407 13:41:54.722569 1216250 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0407 13:41:54.722587 1216250 kubeadm.go:310] 
	I0407 13:41:54.722645 1216250 kubeadm.go:310] 	Unfortunately, an error has occurred:
	I0407 13:41:54.722693 1216250 kubeadm.go:310] 		timed out waiting for the condition
	I0407 13:41:54.722700 1216250 kubeadm.go:310] 
	I0407 13:41:54.722731 1216250 kubeadm.go:310] 	This error is likely caused by:
	I0407 13:41:54.722763 1216250 kubeadm.go:310] 		- The kubelet is not running
	I0407 13:41:54.722856 1216250 kubeadm.go:310] 		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	I0407 13:41:54.722863 1216250 kubeadm.go:310] 
	I0407 13:41:54.722948 1216250 kubeadm.go:310] 	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
	I0407 13:41:54.722985 1216250 kubeadm.go:310] 		- 'systemctl status kubelet'
	I0407 13:41:54.723011 1216250 kubeadm.go:310] 		- 'journalctl -xeu kubelet'
	I0407 13:41:54.723018 1216250 kubeadm.go:310] 
	I0407 13:41:54.723142 1216250 kubeadm.go:310] 	Additionally, a control plane component may have crashed or exited when started by the container runtime.
	I0407 13:41:54.723251 1216250 kubeadm.go:310] 	To troubleshoot, list all containers using your preferred container runtimes CLI.
	I0407 13:41:54.723260 1216250 kubeadm.go:310] 
	I0407 13:41:54.723347 1216250 kubeadm.go:310] 	Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
	I0407 13:41:54.723433 1216250 kubeadm.go:310] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
	I0407 13:41:54.723493 1216250 kubeadm.go:310] 		Once you have found the failing container, you can inspect its logs with:
	I0407 13:41:54.723554 1216250 kubeadm.go:310] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	I0407 13:41:54.723600 1216250 kubeadm.go:310] 
	W0407 13:41:54.723706 1216250 out.go:270] ! initialization failed, will try again: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Generating "apiserver-kubelet-client" certificate and key
	[certs] Generating "front-proxy-ca" certificate and key
	[certs] Generating "front-proxy-client" certificate and key
	[certs] Generating "etcd/ca" certificate and key
	[certs] Generating "etcd/server" certificate and key
	[certs] etcd/server serving cert is signed for DNS names [kubernetes-upgrade-973925 localhost] and IPs [192.168.50.245 127.0.0.1 ::1]
	[certs] Generating "etcd/peer" certificate and key
	[certs] etcd/peer serving cert is signed for DNS names [kubernetes-upgrade-973925 localhost] and IPs [192.168.50.245 127.0.0.1 ::1]
	[certs] Generating "etcd/healthcheck-client" certificate and key
	[certs] Generating "apiserver-etcd-client" certificate and key
	[certs] Generating "sa" key and public key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	! initialization failed, will try again: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Generating "apiserver-kubelet-client" certificate and key
	[certs] Generating "front-proxy-ca" certificate and key
	[certs] Generating "front-proxy-client" certificate and key
	[certs] Generating "etcd/ca" certificate and key
	[certs] Generating "etcd/server" certificate and key
	[certs] etcd/server serving cert is signed for DNS names [kubernetes-upgrade-973925 localhost] and IPs [192.168.50.245 127.0.0.1 ::1]
	[certs] Generating "etcd/peer" certificate and key
	[certs] etcd/peer serving cert is signed for DNS names [kubernetes-upgrade-973925 localhost] and IPs [192.168.50.245 127.0.0.1 ::1]
	[certs] Generating "etcd/healthcheck-client" certificate and key
	[certs] Generating "apiserver-etcd-client" certificate and key
	[certs] Generating "sa" key and public key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	I0407 13:41:54.723748 1216250 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
	I0407 13:41:55.183855 1216250 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0407 13:41:55.201646 1216250 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0407 13:41:55.214680 1216250 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0407 13:41:55.214705 1216250 kubeadm.go:157] found existing configuration files:
	
	I0407 13:41:55.214756 1216250 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0407 13:41:55.227997 1216250 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0407 13:41:55.228069 1216250 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0407 13:41:55.238753 1216250 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0407 13:41:55.250353 1216250 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0407 13:41:55.250449 1216250 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0407 13:41:55.261533 1216250 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0407 13:41:55.273840 1216250 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0407 13:41:55.273922 1216250 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0407 13:41:55.285448 1216250 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0407 13:41:55.297282 1216250 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0407 13:41:55.297352 1216250 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0407 13:41:55.311818 1216250 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0407 13:41:55.553572 1216250 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0407 13:43:51.726660 1216250 kubeadm.go:310] error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	I0407 13:43:51.726819 1216250 kubeadm.go:310] To see the stack trace of this error execute with --v=5 or higher
	I0407 13:43:51.728652 1216250 kubeadm.go:310] [init] Using Kubernetes version: v1.20.0
	I0407 13:43:51.728746 1216250 kubeadm.go:310] [preflight] Running pre-flight checks
	I0407 13:43:51.728865 1216250 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0407 13:43:51.728992 1216250 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0407 13:43:51.729104 1216250 kubeadm.go:310] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0407 13:43:51.729198 1216250 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0407 13:43:51.863523 1216250 out.go:235]   - Generating certificates and keys ...
	I0407 13:43:51.863708 1216250 kubeadm.go:310] [certs] Using existing ca certificate authority
	I0407 13:43:51.863789 1216250 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I0407 13:43:51.863892 1216250 kubeadm.go:310] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I0407 13:43:51.863991 1216250 kubeadm.go:310] [certs] Using existing front-proxy-ca certificate authority
	I0407 13:43:51.864066 1216250 kubeadm.go:310] [certs] Using existing front-proxy-client certificate and key on disk
	I0407 13:43:51.864141 1216250 kubeadm.go:310] [certs] Using existing etcd/ca certificate authority
	I0407 13:43:51.864230 1216250 kubeadm.go:310] [certs] Using existing etcd/server certificate and key on disk
	I0407 13:43:51.864322 1216250 kubeadm.go:310] [certs] Using existing etcd/peer certificate and key on disk
	I0407 13:43:51.864399 1216250 kubeadm.go:310] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I0407 13:43:51.864541 1216250 kubeadm.go:310] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I0407 13:43:51.864639 1216250 kubeadm.go:310] [certs] Using the existing "sa" key
	I0407 13:43:51.864744 1216250 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0407 13:43:51.864829 1216250 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0407 13:43:51.864913 1216250 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0407 13:43:51.865020 1216250 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0407 13:43:51.865109 1216250 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0407 13:43:51.865254 1216250 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0407 13:43:51.865394 1216250 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0407 13:43:51.865467 1216250 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I0407 13:43:51.865563 1216250 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0407 13:43:51.917105 1216250 out.go:235]   - Booting up control plane ...
	I0407 13:43:51.917259 1216250 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0407 13:43:51.917369 1216250 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0407 13:43:51.917457 1216250 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0407 13:43:51.917563 1216250 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0407 13:43:51.917797 1216250 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I0407 13:43:51.917891 1216250 kubeadm.go:310] [kubelet-check] Initial timeout of 40s passed.
	I0407 13:43:51.918011 1216250 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0407 13:43:51.918250 1216250 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0407 13:43:51.918341 1216250 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0407 13:43:51.918563 1216250 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0407 13:43:51.918647 1216250 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0407 13:43:51.918849 1216250 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0407 13:43:51.918953 1216250 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0407 13:43:51.919233 1216250 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0407 13:43:51.919370 1216250 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0407 13:43:51.919688 1216250 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0407 13:43:51.919709 1216250 kubeadm.go:310] 
	I0407 13:43:51.919771 1216250 kubeadm.go:310] 	Unfortunately, an error has occurred:
	I0407 13:43:51.919822 1216250 kubeadm.go:310] 		timed out waiting for the condition
	I0407 13:43:51.919834 1216250 kubeadm.go:310] 
	I0407 13:43:51.919879 1216250 kubeadm.go:310] 	This error is likely caused by:
	I0407 13:43:51.919933 1216250 kubeadm.go:310] 		- The kubelet is not running
	I0407 13:43:51.920084 1216250 kubeadm.go:310] 		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	I0407 13:43:51.920098 1216250 kubeadm.go:310] 
	I0407 13:43:51.920250 1216250 kubeadm.go:310] 	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
	I0407 13:43:51.920346 1216250 kubeadm.go:310] 		- 'systemctl status kubelet'
	I0407 13:43:51.920419 1216250 kubeadm.go:310] 		- 'journalctl -xeu kubelet'
	I0407 13:43:51.920442 1216250 kubeadm.go:310] 
	I0407 13:43:51.920592 1216250 kubeadm.go:310] 	Additionally, a control plane component may have crashed or exited when started by the container runtime.
	I0407 13:43:51.920713 1216250 kubeadm.go:310] 	To troubleshoot, list all containers using your preferred container runtimes CLI.
	I0407 13:43:51.920725 1216250 kubeadm.go:310] 
	I0407 13:43:51.920883 1216250 kubeadm.go:310] 	Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
	I0407 13:43:51.921007 1216250 kubeadm.go:310] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
	I0407 13:43:51.921134 1216250 kubeadm.go:310] 		Once you have found the failing container, you can inspect its logs with:
	I0407 13:43:51.921231 1216250 kubeadm.go:310] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	I0407 13:43:51.921239 1216250 kubeadm.go:310] 
	I0407 13:43:51.921319 1216250 kubeadm.go:394] duration metric: took 3m56.277583372s to StartCluster
	I0407 13:43:51.921364 1216250 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0407 13:43:51.921421 1216250 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0407 13:43:51.957917 1216250 cri.go:89] found id: ""
	I0407 13:43:51.957950 1216250 logs.go:282] 0 containers: []
	W0407 13:43:51.957961 1216250 logs.go:284] No container was found matching "kube-apiserver"
	I0407 13:43:51.957970 1216250 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0407 13:43:51.958071 1216250 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0407 13:43:51.995655 1216250 cri.go:89] found id: ""
	I0407 13:43:51.995702 1216250 logs.go:282] 0 containers: []
	W0407 13:43:51.995715 1216250 logs.go:284] No container was found matching "etcd"
	I0407 13:43:51.995724 1216250 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0407 13:43:51.995794 1216250 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0407 13:43:52.038916 1216250 cri.go:89] found id: ""
	I0407 13:43:52.038962 1216250 logs.go:282] 0 containers: []
	W0407 13:43:52.038975 1216250 logs.go:284] No container was found matching "coredns"
	I0407 13:43:52.038984 1216250 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0407 13:43:52.039070 1216250 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0407 13:43:52.079783 1216250 cri.go:89] found id: ""
	I0407 13:43:52.079820 1216250 logs.go:282] 0 containers: []
	W0407 13:43:52.079832 1216250 logs.go:284] No container was found matching "kube-scheduler"
	I0407 13:43:52.079841 1216250 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0407 13:43:52.079917 1216250 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0407 13:43:52.119819 1216250 cri.go:89] found id: ""
	I0407 13:43:52.119854 1216250 logs.go:282] 0 containers: []
	W0407 13:43:52.119865 1216250 logs.go:284] No container was found matching "kube-proxy"
	I0407 13:43:52.119874 1216250 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0407 13:43:52.119953 1216250 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0407 13:43:52.169634 1216250 cri.go:89] found id: ""
	I0407 13:43:52.169674 1216250 logs.go:282] 0 containers: []
	W0407 13:43:52.169689 1216250 logs.go:284] No container was found matching "kube-controller-manager"
	I0407 13:43:52.169698 1216250 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0407 13:43:52.169808 1216250 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0407 13:43:52.207142 1216250 cri.go:89] found id: ""
	I0407 13:43:52.207178 1216250 logs.go:282] 0 containers: []
	W0407 13:43:52.207188 1216250 logs.go:284] No container was found matching "kindnet"
	I0407 13:43:52.207204 1216250 logs.go:123] Gathering logs for CRI-O ...
	I0407 13:43:52.207221 1216250 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0407 13:43:52.321792 1216250 logs.go:123] Gathering logs for container status ...
	I0407 13:43:52.321839 1216250 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0407 13:43:52.379731 1216250 logs.go:123] Gathering logs for kubelet ...
	I0407 13:43:52.379772 1216250 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0407 13:43:52.447530 1216250 logs.go:123] Gathering logs for dmesg ...
	I0407 13:43:52.447579 1216250 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0407 13:43:52.468014 1216250 logs.go:123] Gathering logs for describe nodes ...
	I0407 13:43:52.468086 1216250 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0407 13:43:52.600251 1216250 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	W0407 13:43:52.600286 1216250 out.go:418] Error starting cluster: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	W0407 13:43:52.600364 1216250 out.go:270] * 
	* 
	W0407 13:43:52.600438 1216250 out.go:270] X Error starting cluster: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	X Error starting cluster: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	W0407 13:43:52.600457 1216250 out.go:270] * 
	* 
	W0407 13:43:52.601340 1216250 out.go:293] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0407 13:43:52.700881 1216250 out.go:201] 
	W0407 13:43:52.775357 1216250 out.go:270] X Exiting due to K8S_KUBELET_NOT_RUNNING: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	X Exiting due to K8S_KUBELET_NOT_RUNNING: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	W0407 13:43:52.775420 1216250 out.go:270] * Suggestion: Check output of 'journalctl -xeu kubelet', try passing --extra-config=kubelet.cgroup-driver=systemd to minikube start
	* Suggestion: Check output of 'journalctl -xeu kubelet', try passing --extra-config=kubelet.cgroup-driver=systemd to minikube start
	W0407 13:43:52.775449 1216250 out.go:270] * Related issue: https://github.com/kubernetes/minikube/issues/4172
	* Related issue: https://github.com/kubernetes/minikube/issues/4172
	I0407 13:43:52.924671 1216250 out.go:201] 

                                                
                                                
** /stderr **
version_upgrade_test.go:224: failed to start minikube HEAD with oldest k8s version: out/minikube-linux-amd64 start -p kubernetes-upgrade-973925 --memory=2200 --kubernetes-version=v1.20.0 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio: exit status 109
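The exit status 109 failure above corresponds to the K8S_KUBELET_NOT_RUNNING error shown in the captured stderr, and that output's own suggestion is to inspect 'journalctl -xeu kubelet' and retry with the systemd cgroup driver for the kubelet. A minimal retry sketch along those lines, using the same profile, memory, driver and runtime flags as the failed run with only the suggested kubelet extra-config added (the preceding delete is an assumption, included only to guarantee a clean slate):

    out/minikube-linux-amd64 delete -p kubernetes-upgrade-973925
    out/minikube-linux-amd64 start -p kubernetes-upgrade-973925 --memory=2200 \
        --kubernetes-version=v1.20.0 --driver=kvm2 --container-runtime=crio \
        --extra-config=kubelet.cgroup-driver=systemd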
version_upgrade_test.go:227: (dbg) Run:  out/minikube-linux-amd64 stop -p kubernetes-upgrade-973925
version_upgrade_test.go:227: (dbg) Done: out/minikube-linux-amd64 stop -p kubernetes-upgrade-973925: (5.63954584s)
version_upgrade_test.go:232: (dbg) Run:  out/minikube-linux-amd64 -p kubernetes-upgrade-973925 status --format={{.Host}}
version_upgrade_test.go:232: (dbg) Non-zero exit: out/minikube-linux-amd64 -p kubernetes-upgrade-973925 status --format={{.Host}}: exit status 7 (82.894029ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
version_upgrade_test.go:234: status error: exit status 7 (may be ok)
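Exit status 7 from 'minikube status' right after a deliberate stop is expected, which is why the harness records it as "may be ok": the status command reports component health as additive exit-code bits rather than a simple pass/fail. A quick way to read it back, assuming the documented bit mapping (host=1, cluster=2, kubernetes=4 -- treat that mapping as an assumption, not something asserted by this run):

    out/minikube-linux-amd64 -p kubernetes-upgrade-973925 status --format={{.Host}}
    rc=$?    # after a stop, 7 = 1 (host) + 2 (cluster) + 4 (kubernetes), i.e. everything reported down
    echo "status exit bits: $rc"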
version_upgrade_test.go:243: (dbg) Run:  out/minikube-linux-amd64 start -p kubernetes-upgrade-973925 --memory=2200 --kubernetes-version=v1.32.2 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio
version_upgrade_test.go:243: (dbg) Done: out/minikube-linux-amd64 start -p kubernetes-upgrade-973925 --memory=2200 --kubernetes-version=v1.32.2 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio: (46.632584774s)
version_upgrade_test.go:248: (dbg) Run:  kubectl --context kubernetes-upgrade-973925 version --output=json
version_upgrade_test.go:267: Attempting to downgrade Kubernetes (should fail)
version_upgrade_test.go:269: (dbg) Run:  out/minikube-linux-amd64 start -p kubernetes-upgrade-973925 --memory=2200 --kubernetes-version=v1.20.0 --driver=kvm2  --container-runtime=crio
version_upgrade_test.go:269: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p kubernetes-upgrade-973925 --memory=2200 --kubernetes-version=v1.20.0 --driver=kvm2  --container-runtime=crio: exit status 106 (114.325119ms)

                                                
                                                
-- stdout --
	* [kubernetes-upgrade-973925] minikube v1.35.0 on Ubuntu 20.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=20602
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/20602-1162386/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/20602-1162386/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to K8S_DOWNGRADE_UNSUPPORTED: Unable to safely downgrade existing Kubernetes v1.32.2 cluster to v1.20.0
	* Suggestion: 
	
	    1) Recreate the cluster with Kubernetes 1.20.0, by running:
	    
	    minikube delete -p kubernetes-upgrade-973925
	    minikube start -p kubernetes-upgrade-973925 --kubernetes-version=v1.20.0
	    
	    2) Create a second cluster with Kubernetes 1.20.0, by running:
	    
	    minikube start -p kubernetes-upgrade-9739252 --kubernetes-version=v1.20.0
	    
	    3) Use the existing cluster at version Kubernetes 1.32.2, by running:
	    
	    minikube start -p kubernetes-upgrade-973925 --kubernetes-version=v1.32.2
	    

                                                
                                                
** /stderr **
version_upgrade_test.go:273: Attempting restart after unsuccessful downgrade
version_upgrade_test.go:275: (dbg) Run:  out/minikube-linux-amd64 start -p kubernetes-upgrade-973925 --memory=2200 --kubernetes-version=v1.32.2 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio
E0407 13:44:45.736658 1169716 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20602-1162386/.minikube/profiles/no-preload-028452/client.crt: no such file or directory" logger="UnhandledError"
version_upgrade_test.go:275: (dbg) Done: out/minikube-linux-amd64 start -p kubernetes-upgrade-973925 --memory=2200 --kubernetes-version=v1.32.2 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio: (33.991166696s)
version_upgrade_test.go:279: *** TestKubernetesUpgrade FAILED at 2025-04-07 13:45:19.575739115 +0000 UTC m=+5515.340650769
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p kubernetes-upgrade-973925 -n kubernetes-upgrade-973925
helpers_test.go:244: <<< TestKubernetesUpgrade FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestKubernetesUpgrade]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p kubernetes-upgrade-973925 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-amd64 -p kubernetes-upgrade-973925 logs -n 25: (2.188194816s)
helpers_test.go:252: TestKubernetesUpgrade logs: 
-- stdout --
	
	==> Audit <==
	|---------|-------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
	| Command |                         Args                          |           Profile            |  User   | Version |     Start Time      |      End Time       |
	|---------|-------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
	| start   | -p pause-111763                                       | pause-111763                 | jenkins | v1.35.0 | 07 Apr 25 13:38 UTC | 07 Apr 25 13:39 UTC |
	|         | --alsologtostderr                                     |                              |         |         |                     |                     |
	|         | -v=1 --driver=kvm2                                    |                              |         |         |                     |                     |
	|         | --container-runtime=crio                              |                              |         |         |                     |                     |
	| delete  | -p pause-111763                                       | pause-111763                 | jenkins | v1.35.0 | 07 Apr 25 13:39 UTC | 07 Apr 25 13:39 UTC |
	| start   | -p kubernetes-upgrade-973925                          | kubernetes-upgrade-973925    | jenkins | v1.35.0 | 07 Apr 25 13:39 UTC |                     |
	|         | --memory=2200                                         |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.20.0                          |                              |         |         |                     |                     |
	|         | --alsologtostderr                                     |                              |         |         |                     |                     |
	|         | -v=1 --driver=kvm2                                    |                              |         |         |                     |                     |
	|         | --container-runtime=crio                              |                              |         |         |                     |                     |
	| image   | embed-certs-931633 image list                         | embed-certs-931633           | jenkins | v1.35.0 | 07 Apr 25 13:42 UTC | 07 Apr 25 13:42 UTC |
	|         | --format=json                                         |                              |         |         |                     |                     |
	| pause   | -p embed-certs-931633                                 | embed-certs-931633           | jenkins | v1.35.0 | 07 Apr 25 13:42 UTC | 07 Apr 25 13:42 UTC |
	|         | --alsologtostderr -v=1                                |                              |         |         |                     |                     |
	| unpause | -p embed-certs-931633                                 | embed-certs-931633           | jenkins | v1.35.0 | 07 Apr 25 13:42 UTC | 07 Apr 25 13:42 UTC |
	|         | --alsologtostderr -v=1                                |                              |         |         |                     |                     |
	| delete  | -p embed-certs-931633                                 | embed-certs-931633           | jenkins | v1.35.0 | 07 Apr 25 13:42 UTC | 07 Apr 25 13:42 UTC |
	| delete  | -p embed-certs-931633                                 | embed-certs-931633           | jenkins | v1.35.0 | 07 Apr 25 13:42 UTC | 07 Apr 25 13:42 UTC |
	| start   | -p stopped-upgrade-392390                             | minikube                     | jenkins | v1.26.0 | 07 Apr 25 13:42 UTC | 07 Apr 25 13:43 UTC |
	|         | --memory=2200 --vm-driver=kvm2                        |                              |         |         |                     |                     |
	|         |  --container-runtime=crio                             |                              |         |         |                     |                     |
	| image   | no-preload-028452 image list                          | no-preload-028452            | jenkins | v1.35.0 | 07 Apr 25 13:43 UTC | 07 Apr 25 13:43 UTC |
	|         | --format=json                                         |                              |         |         |                     |                     |
	| pause   | -p no-preload-028452                                  | no-preload-028452            | jenkins | v1.35.0 | 07 Apr 25 13:43 UTC | 07 Apr 25 13:43 UTC |
	|         | --alsologtostderr -v=1                                |                              |         |         |                     |                     |
	| unpause | -p no-preload-028452                                  | no-preload-028452            | jenkins | v1.35.0 | 07 Apr 25 13:43 UTC | 07 Apr 25 13:43 UTC |
	|         | --alsologtostderr -v=1                                |                              |         |         |                     |                     |
	| delete  | -p no-preload-028452                                  | no-preload-028452            | jenkins | v1.35.0 | 07 Apr 25 13:43 UTC | 07 Apr 25 13:43 UTC |
	| delete  | -p no-preload-028452                                  | no-preload-028452            | jenkins | v1.35.0 | 07 Apr 25 13:43 UTC | 07 Apr 25 13:43 UTC |
	| start   | -p                                                    | default-k8s-diff-port-405061 | jenkins | v1.35.0 | 07 Apr 25 13:43 UTC | 07 Apr 25 13:44 UTC |
	|         | default-k8s-diff-port-405061                          |                              |         |         |                     |                     |
	|         | --memory=2200                                         |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                         |                              |         |         |                     |                     |
	|         | --apiserver-port=8444                                 |                              |         |         |                     |                     |
	|         | --driver=kvm2                                         |                              |         |         |                     |                     |
	|         | --container-runtime=crio                              |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.32.2                          |                              |         |         |                     |                     |
	| stop    | stopped-upgrade-392390 stop                           | minikube                     | jenkins | v1.26.0 | 07 Apr 25 13:43 UTC | 07 Apr 25 13:43 UTC |
	| start   | -p stopped-upgrade-392390                             | stopped-upgrade-392390       | jenkins | v1.35.0 | 07 Apr 25 13:43 UTC | 07 Apr 25 13:44 UTC |
	|         | --memory=2200                                         |                              |         |         |                     |                     |
	|         | --alsologtostderr                                     |                              |         |         |                     |                     |
	|         | -v=1 --driver=kvm2                                    |                              |         |         |                     |                     |
	|         | --container-runtime=crio                              |                              |         |         |                     |                     |
	| stop    | -p kubernetes-upgrade-973925                          | kubernetes-upgrade-973925    | jenkins | v1.35.0 | 07 Apr 25 13:43 UTC | 07 Apr 25 13:43 UTC |
	| start   | -p kubernetes-upgrade-973925                          | kubernetes-upgrade-973925    | jenkins | v1.35.0 | 07 Apr 25 13:43 UTC | 07 Apr 25 13:44 UTC |
	|         | --memory=2200                                         |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.32.2                          |                              |         |         |                     |                     |
	|         | --alsologtostderr                                     |                              |         |         |                     |                     |
	|         | -v=1 --driver=kvm2                                    |                              |         |         |                     |                     |
	|         | --container-runtime=crio                              |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p default-k8s-diff-port-405061 | default-k8s-diff-port-405061 | jenkins | v1.35.0 | 07 Apr 25 13:44 UTC | 07 Apr 25 13:44 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                |                              |         |         |                     |                     |
	| stop    | -p                                                    | default-k8s-diff-port-405061 | jenkins | v1.35.0 | 07 Apr 25 13:44 UTC |                     |
	|         | default-k8s-diff-port-405061                          |                              |         |         |                     |                     |
	|         | --alsologtostderr -v=3                                |                              |         |         |                     |                     |
	| delete  | -p stopped-upgrade-392390                             | stopped-upgrade-392390       | jenkins | v1.35.0 | 07 Apr 25 13:44 UTC | 07 Apr 25 13:44 UTC |
	| start   | -p newest-cni-896794 --memory=2200 --alsologtostderr  | newest-cni-896794            | jenkins | v1.35.0 | 07 Apr 25 13:44 UTC |                     |
	|         | --wait=apiserver,system_pods,default_sa               |                              |         |         |                     |                     |
	|         | --network-plugin=cni                                  |                              |         |         |                     |                     |
	|         | --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16  |                              |         |         |                     |                     |
	|         | --driver=kvm2  --container-runtime=crio               |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.32.2                          |                              |         |         |                     |                     |
	| start   | -p kubernetes-upgrade-973925                          | kubernetes-upgrade-973925    | jenkins | v1.35.0 | 07 Apr 25 13:44 UTC |                     |
	|         | --memory=2200                                         |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.20.0                          |                              |         |         |                     |                     |
	|         | --driver=kvm2                                         |                              |         |         |                     |                     |
	|         | --container-runtime=crio                              |                              |         |         |                     |                     |
	| start   | -p kubernetes-upgrade-973925                          | kubernetes-upgrade-973925    | jenkins | v1.35.0 | 07 Apr 25 13:44 UTC | 07 Apr 25 13:45 UTC |
	|         | --memory=2200                                         |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.32.2                          |                              |         |         |                     |                     |
	|         | --alsologtostderr                                     |                              |         |         |                     |                     |
	|         | -v=1 --driver=kvm2                                    |                              |         |         |                     |                     |
	|         | --container-runtime=crio                              |                              |         |         |                     |                     |
	|---------|-------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2025/04/07 13:44:45
	Running on machine: ubuntu-20-agent-2
	Binary: Built with gc go1.24.0 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0407 13:44:45.650244 1219904 out.go:345] Setting OutFile to fd 1 ...
	I0407 13:44:45.650355 1219904 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0407 13:44:45.650362 1219904 out.go:358] Setting ErrFile to fd 2...
	I0407 13:44:45.650368 1219904 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0407 13:44:45.650575 1219904 root.go:338] Updating PATH: /home/jenkins/minikube-integration/20602-1162386/.minikube/bin
	I0407 13:44:45.651144 1219904 out.go:352] Setting JSON to false
	I0407 13:44:45.652264 1219904 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-2","uptime":19630,"bootTime":1744013856,"procs":214,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1078-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0407 13:44:45.652329 1219904 start.go:139] virtualization: kvm guest
	I0407 13:44:45.654878 1219904 out.go:177] * [kubernetes-upgrade-973925] minikube v1.35.0 on Ubuntu 20.04 (kvm/amd64)
	I0407 13:44:45.656758 1219904 out.go:177]   - MINIKUBE_LOCATION=20602
	I0407 13:44:45.656767 1219904 notify.go:220] Checking for updates...
	I0407 13:44:45.658330 1219904 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0407 13:44:45.659898 1219904 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/20602-1162386/kubeconfig
	I0407 13:44:45.661430 1219904 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/20602-1162386/.minikube
	I0407 13:44:45.663129 1219904 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0407 13:44:45.664968 1219904 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0407 13:44:45.666997 1219904 config.go:182] Loaded profile config "kubernetes-upgrade-973925": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.32.2
	I0407 13:44:45.667631 1219904 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0407 13:44:45.667709 1219904 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0407 13:44:45.689430 1219904 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:32899
	I0407 13:44:45.690161 1219904 main.go:141] libmachine: () Calling .GetVersion
	I0407 13:44:45.691032 1219904 main.go:141] libmachine: Using API Version  1
	I0407 13:44:45.691067 1219904 main.go:141] libmachine: () Calling .SetConfigRaw
	I0407 13:44:45.691669 1219904 main.go:141] libmachine: () Calling .GetMachineName
	I0407 13:44:45.691919 1219904 main.go:141] libmachine: (kubernetes-upgrade-973925) Calling .DriverName
	I0407 13:44:45.692270 1219904 driver.go:394] Setting default libvirt URI to qemu:///system
	I0407 13:44:45.692734 1219904 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0407 13:44:45.692795 1219904 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0407 13:44:45.709859 1219904 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41887
	I0407 13:44:45.710644 1219904 main.go:141] libmachine: () Calling .GetVersion
	I0407 13:44:45.711393 1219904 main.go:141] libmachine: Using API Version  1
	I0407 13:44:45.711427 1219904 main.go:141] libmachine: () Calling .SetConfigRaw
	I0407 13:44:45.711915 1219904 main.go:141] libmachine: () Calling .GetMachineName
	I0407 13:44:45.712189 1219904 main.go:141] libmachine: (kubernetes-upgrade-973925) Calling .DriverName
	I0407 13:44:45.760432 1219904 out.go:177] * Using the kvm2 driver based on existing profile
	I0407 13:44:45.761790 1219904 start.go:297] selected driver: kvm2
	I0407 13:44:45.761811 1219904 start.go:901] validating driver "kvm2" against &{Name:kubernetes-upgrade-973925 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.35.0-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.46-1743675393-20591@sha256:8de8167e280414c9d16e4c3da59bc85bc7c9cc24228af995863fc7bcabfcf727 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.32.2 ClusterName:kubernetes-upgrade-973925 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.50.245 Port:8443 KubernetesVersion:v1.32.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0407 13:44:45.761984 1219904 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0407 13:44:45.762924 1219904 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0407 13:44:45.763014 1219904 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/20602-1162386/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0407 13:44:45.780051 1219904 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.35.0
	I0407 13:44:45.780487 1219904 cni.go:84] Creating CNI manager for ""
	I0407 13:44:45.780532 1219904 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0407 13:44:45.780572 1219904 start.go:340] cluster config:
	{Name:kubernetes-upgrade-973925 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.35.0-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.46-1743675393-20591@sha256:8de8167e280414c9d16e4c3da59bc85bc7c9cc24228af995863fc7bcabfcf727 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.32.2 ClusterName:kubernetes-upgrade-973925 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.50.245 Port:8443 KubernetesVersion:v1.32.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0407 13:44:45.780727 1219904 iso.go:125] acquiring lock: {Name:mk51e1827709f7a3810dbd898083f8185ece65eb Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0407 13:44:45.782746 1219904 out.go:177] * Starting "kubernetes-upgrade-973925" primary control-plane node in "kubernetes-upgrade-973925" cluster
	I0407 13:44:43.625038 1219664 main.go:141] libmachine: (newest-cni-896794) DBG | domain newest-cni-896794 has defined MAC address 52:54:00:5c:76:4b in network mk-newest-cni-896794
	I0407 13:44:43.625658 1219664 main.go:141] libmachine: (newest-cni-896794) DBG | unable to find current IP address of domain newest-cni-896794 in network mk-newest-cni-896794
	I0407 13:44:43.625732 1219664 main.go:141] libmachine: (newest-cni-896794) DBG | I0407 13:44:43.625564 1219687 retry.go:31] will retry after 2.097222099s: waiting for domain to come up
	I0407 13:44:45.725767 1219664 main.go:141] libmachine: (newest-cni-896794) DBG | domain newest-cni-896794 has defined MAC address 52:54:00:5c:76:4b in network mk-newest-cni-896794
	I0407 13:44:45.726272 1219664 main.go:141] libmachine: (newest-cni-896794) DBG | unable to find current IP address of domain newest-cni-896794 in network mk-newest-cni-896794
	I0407 13:44:45.726299 1219664 main.go:141] libmachine: (newest-cni-896794) DBG | I0407 13:44:45.726177 1219687 retry.go:31] will retry after 3.393705088s: waiting for domain to come up
	I0407 13:44:45.784515 1219904 preload.go:131] Checking if preload exists for k8s version v1.32.2 and runtime crio
	I0407 13:44:45.784589 1219904 preload.go:146] Found local preload: /home/jenkins/minikube-integration/20602-1162386/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.32.2-cri-o-overlay-amd64.tar.lz4
	I0407 13:44:45.784603 1219904 cache.go:56] Caching tarball of preloaded images
	I0407 13:44:45.784721 1219904 preload.go:172] Found /home/jenkins/minikube-integration/20602-1162386/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.32.2-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I0407 13:44:45.784737 1219904 cache.go:59] Finished verifying existence of preloaded tar for v1.32.2 on crio
	I0407 13:44:45.784848 1219904 profile.go:143] Saving config to /home/jenkins/minikube-integration/20602-1162386/.minikube/profiles/kubernetes-upgrade-973925/config.json ...
	I0407 13:44:45.785102 1219904 start.go:360] acquireMachinesLock for kubernetes-upgrade-973925: {Name:mk51d4c744fa92d56cf6ac11b1e792c85ef6709a Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0407 13:44:49.122138 1219664 main.go:141] libmachine: (newest-cni-896794) DBG | domain newest-cni-896794 has defined MAC address 52:54:00:5c:76:4b in network mk-newest-cni-896794
	I0407 13:44:49.122874 1219664 main.go:141] libmachine: (newest-cni-896794) DBG | unable to find current IP address of domain newest-cni-896794 in network mk-newest-cni-896794
	I0407 13:44:49.122910 1219664 main.go:141] libmachine: (newest-cni-896794) DBG | I0407 13:44:49.122805 1219687 retry.go:31] will retry after 2.772733593s: waiting for domain to come up
	I0407 13:44:51.897854 1219664 main.go:141] libmachine: (newest-cni-896794) DBG | domain newest-cni-896794 has defined MAC address 52:54:00:5c:76:4b in network mk-newest-cni-896794
	I0407 13:44:51.898269 1219664 main.go:141] libmachine: (newest-cni-896794) DBG | unable to find current IP address of domain newest-cni-896794 in network mk-newest-cni-896794
	I0407 13:44:51.898320 1219664 main.go:141] libmachine: (newest-cni-896794) DBG | I0407 13:44:51.898253 1219687 retry.go:31] will retry after 4.726696979s: waiting for domain to come up
	I0407 13:44:56.629765 1219664 main.go:141] libmachine: (newest-cni-896794) DBG | domain newest-cni-896794 has defined MAC address 52:54:00:5c:76:4b in network mk-newest-cni-896794
	I0407 13:44:56.630526 1219664 main.go:141] libmachine: (newest-cni-896794) found domain IP: 192.168.61.209
	I0407 13:44:56.630557 1219664 main.go:141] libmachine: (newest-cni-896794) reserving static IP address...
	I0407 13:44:56.630567 1219664 main.go:141] libmachine: (newest-cni-896794) DBG | domain newest-cni-896794 has current primary IP address 192.168.61.209 and MAC address 52:54:00:5c:76:4b in network mk-newest-cni-896794
	I0407 13:44:56.631195 1219664 main.go:141] libmachine: (newest-cni-896794) DBG | unable to find host DHCP lease matching {name: "newest-cni-896794", mac: "52:54:00:5c:76:4b", ip: "192.168.61.209"} in network mk-newest-cni-896794
	I0407 13:44:56.741113 1219664 main.go:141] libmachine: (newest-cni-896794) DBG | Getting to WaitForSSH function...
	I0407 13:44:56.741144 1219664 main.go:141] libmachine: (newest-cni-896794) reserved static IP address 192.168.61.209 for domain newest-cni-896794
	I0407 13:44:56.741158 1219664 main.go:141] libmachine: (newest-cni-896794) waiting for SSH...
	I0407 13:44:56.744203 1219664 main.go:141] libmachine: (newest-cni-896794) DBG | domain newest-cni-896794 has defined MAC address 52:54:00:5c:76:4b in network mk-newest-cni-896794
	I0407 13:44:56.744796 1219664 main.go:141] libmachine: (newest-cni-896794) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5c:76:4b", ip: ""} in network mk-newest-cni-896794: {Iface:virbr3 ExpiryTime:2025-04-07 14:44:48 +0000 UTC Type:0 Mac:52:54:00:5c:76:4b Iaid: IPaddr:192.168.61.209 Prefix:24 Hostname:minikube Clientid:01:52:54:00:5c:76:4b}
	I0407 13:44:56.744829 1219664 main.go:141] libmachine: (newest-cni-896794) DBG | domain newest-cni-896794 has defined IP address 192.168.61.209 and MAC address 52:54:00:5c:76:4b in network mk-newest-cni-896794
	I0407 13:44:56.745017 1219664 main.go:141] libmachine: (newest-cni-896794) DBG | Using SSH client type: external
	I0407 13:44:56.745042 1219664 main.go:141] libmachine: (newest-cni-896794) DBG | Using SSH private key: /home/jenkins/minikube-integration/20602-1162386/.minikube/machines/newest-cni-896794/id_rsa (-rw-------)
	I0407 13:44:56.745073 1219664 main.go:141] libmachine: (newest-cni-896794) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.61.209 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/20602-1162386/.minikube/machines/newest-cni-896794/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0407 13:44:56.745093 1219664 main.go:141] libmachine: (newest-cni-896794) DBG | About to run SSH command:
	I0407 13:44:56.745110 1219664 main.go:141] libmachine: (newest-cni-896794) DBG | exit 0
	I0407 13:44:56.865842 1219664 main.go:141] libmachine: (newest-cni-896794) DBG | SSH cmd err, output: <nil>: 
	I0407 13:44:56.866191 1219664 main.go:141] libmachine: (newest-cni-896794) KVM machine creation complete
	I0407 13:44:56.866508 1219664 main.go:141] libmachine: (newest-cni-896794) Calling .GetConfigRaw
	I0407 13:44:56.867118 1219664 main.go:141] libmachine: (newest-cni-896794) Calling .DriverName
	I0407 13:44:56.867361 1219664 main.go:141] libmachine: (newest-cni-896794) Calling .DriverName
	I0407 13:44:56.867553 1219664 main.go:141] libmachine: Waiting for machine to be running, this may take a few minutes...
	I0407 13:44:56.867572 1219664 main.go:141] libmachine: (newest-cni-896794) Calling .GetState
	I0407 13:44:56.869038 1219664 main.go:141] libmachine: Detecting operating system of created instance...
	I0407 13:44:56.869054 1219664 main.go:141] libmachine: Waiting for SSH to be available...
	I0407 13:44:56.869062 1219664 main.go:141] libmachine: Getting to WaitForSSH function...
	I0407 13:44:56.869071 1219664 main.go:141] libmachine: (newest-cni-896794) Calling .GetSSHHostname
	I0407 13:44:56.872062 1219664 main.go:141] libmachine: (newest-cni-896794) DBG | domain newest-cni-896794 has defined MAC address 52:54:00:5c:76:4b in network mk-newest-cni-896794
	I0407 13:44:56.872549 1219664 main.go:141] libmachine: (newest-cni-896794) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5c:76:4b", ip: ""} in network mk-newest-cni-896794: {Iface:virbr3 ExpiryTime:2025-04-07 14:44:48 +0000 UTC Type:0 Mac:52:54:00:5c:76:4b Iaid: IPaddr:192.168.61.209 Prefix:24 Hostname:newest-cni-896794 Clientid:01:52:54:00:5c:76:4b}
	I0407 13:44:56.872581 1219664 main.go:141] libmachine: (newest-cni-896794) DBG | domain newest-cni-896794 has defined IP address 192.168.61.209 and MAC address 52:54:00:5c:76:4b in network mk-newest-cni-896794
	I0407 13:44:56.872753 1219664 main.go:141] libmachine: (newest-cni-896794) Calling .GetSSHPort
	I0407 13:44:56.872965 1219664 main.go:141] libmachine: (newest-cni-896794) Calling .GetSSHKeyPath
	I0407 13:44:56.873110 1219664 main.go:141] libmachine: (newest-cni-896794) Calling .GetSSHKeyPath
	I0407 13:44:56.873274 1219664 main.go:141] libmachine: (newest-cni-896794) Calling .GetSSHUsername
	I0407 13:44:56.873517 1219664 main.go:141] libmachine: Using SSH client type: native
	I0407 13:44:56.873918 1219664 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x836360] 0x839060 <nil>  [] 0s} 192.168.61.209 22 <nil> <nil>}
	I0407 13:44:56.873936 1219664 main.go:141] libmachine: About to run SSH command:
	exit 0
	I0407 13:44:56.969172 1219664 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0407 13:44:56.969206 1219664 main.go:141] libmachine: Detecting the provisioner...
	I0407 13:44:56.969214 1219664 main.go:141] libmachine: (newest-cni-896794) Calling .GetSSHHostname
	I0407 13:44:56.972425 1219664 main.go:141] libmachine: (newest-cni-896794) DBG | domain newest-cni-896794 has defined MAC address 52:54:00:5c:76:4b in network mk-newest-cni-896794
	I0407 13:44:56.972866 1219664 main.go:141] libmachine: (newest-cni-896794) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5c:76:4b", ip: ""} in network mk-newest-cni-896794: {Iface:virbr3 ExpiryTime:2025-04-07 14:44:48 +0000 UTC Type:0 Mac:52:54:00:5c:76:4b Iaid: IPaddr:192.168.61.209 Prefix:24 Hostname:newest-cni-896794 Clientid:01:52:54:00:5c:76:4b}
	I0407 13:44:56.972897 1219664 main.go:141] libmachine: (newest-cni-896794) DBG | domain newest-cni-896794 has defined IP address 192.168.61.209 and MAC address 52:54:00:5c:76:4b in network mk-newest-cni-896794
	I0407 13:44:56.973030 1219664 main.go:141] libmachine: (newest-cni-896794) Calling .GetSSHPort
	I0407 13:44:56.973248 1219664 main.go:141] libmachine: (newest-cni-896794) Calling .GetSSHKeyPath
	I0407 13:44:56.973419 1219664 main.go:141] libmachine: (newest-cni-896794) Calling .GetSSHKeyPath
	I0407 13:44:56.973547 1219664 main.go:141] libmachine: (newest-cni-896794) Calling .GetSSHUsername
	I0407 13:44:56.973759 1219664 main.go:141] libmachine: Using SSH client type: native
	I0407 13:44:56.973981 1219664 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x836360] 0x839060 <nil>  [] 0s} 192.168.61.209 22 <nil> <nil>}
	I0407 13:44:56.973996 1219664 main.go:141] libmachine: About to run SSH command:
	cat /etc/os-release
	I0407 13:44:57.075029 1219664 main.go:141] libmachine: SSH cmd err, output: <nil>: NAME=Buildroot
	VERSION=2023.02.9-dirty
	ID=buildroot
	VERSION_ID=2023.02.9
	PRETTY_NAME="Buildroot 2023.02.9"
	
	I0407 13:44:57.075106 1219664 main.go:141] libmachine: found compatible host: buildroot
	I0407 13:44:57.075113 1219664 main.go:141] libmachine: Provisioning with buildroot...
	I0407 13:44:57.075122 1219664 main.go:141] libmachine: (newest-cni-896794) Calling .GetMachineName
	I0407 13:44:57.075430 1219664 buildroot.go:166] provisioning hostname "newest-cni-896794"
	I0407 13:44:57.075460 1219664 main.go:141] libmachine: (newest-cni-896794) Calling .GetMachineName
	I0407 13:44:57.075676 1219664 main.go:141] libmachine: (newest-cni-896794) Calling .GetSSHHostname
	I0407 13:44:57.079033 1219664 main.go:141] libmachine: (newest-cni-896794) DBG | domain newest-cni-896794 has defined MAC address 52:54:00:5c:76:4b in network mk-newest-cni-896794
	I0407 13:44:57.079425 1219664 main.go:141] libmachine: (newest-cni-896794) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5c:76:4b", ip: ""} in network mk-newest-cni-896794: {Iface:virbr3 ExpiryTime:2025-04-07 14:44:48 +0000 UTC Type:0 Mac:52:54:00:5c:76:4b Iaid: IPaddr:192.168.61.209 Prefix:24 Hostname:newest-cni-896794 Clientid:01:52:54:00:5c:76:4b}
	I0407 13:44:57.079459 1219664 main.go:141] libmachine: (newest-cni-896794) DBG | domain newest-cni-896794 has defined IP address 192.168.61.209 and MAC address 52:54:00:5c:76:4b in network mk-newest-cni-896794
	I0407 13:44:57.079691 1219664 main.go:141] libmachine: (newest-cni-896794) Calling .GetSSHPort
	I0407 13:44:57.079933 1219664 main.go:141] libmachine: (newest-cni-896794) Calling .GetSSHKeyPath
	I0407 13:44:57.080112 1219664 main.go:141] libmachine: (newest-cni-896794) Calling .GetSSHKeyPath
	I0407 13:44:57.080268 1219664 main.go:141] libmachine: (newest-cni-896794) Calling .GetSSHUsername
	I0407 13:44:57.080488 1219664 main.go:141] libmachine: Using SSH client type: native
	I0407 13:44:57.080691 1219664 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x836360] 0x839060 <nil>  [] 0s} 192.168.61.209 22 <nil> <nil>}
	I0407 13:44:57.080705 1219664 main.go:141] libmachine: About to run SSH command:
	sudo hostname newest-cni-896794 && echo "newest-cni-896794" | sudo tee /etc/hostname
	I0407 13:44:57.203707 1219664 main.go:141] libmachine: SSH cmd err, output: <nil>: newest-cni-896794
	
	I0407 13:44:57.203737 1219664 main.go:141] libmachine: (newest-cni-896794) Calling .GetSSHHostname
	I0407 13:44:57.206860 1219664 main.go:141] libmachine: (newest-cni-896794) DBG | domain newest-cni-896794 has defined MAC address 52:54:00:5c:76:4b in network mk-newest-cni-896794
	I0407 13:44:57.207388 1219664 main.go:141] libmachine: (newest-cni-896794) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5c:76:4b", ip: ""} in network mk-newest-cni-896794: {Iface:virbr3 ExpiryTime:2025-04-07 14:44:48 +0000 UTC Type:0 Mac:52:54:00:5c:76:4b Iaid: IPaddr:192.168.61.209 Prefix:24 Hostname:newest-cni-896794 Clientid:01:52:54:00:5c:76:4b}
	I0407 13:44:57.207423 1219664 main.go:141] libmachine: (newest-cni-896794) DBG | domain newest-cni-896794 has defined IP address 192.168.61.209 and MAC address 52:54:00:5c:76:4b in network mk-newest-cni-896794
	I0407 13:44:57.207683 1219664 main.go:141] libmachine: (newest-cni-896794) Calling .GetSSHPort
	I0407 13:44:57.207899 1219664 main.go:141] libmachine: (newest-cni-896794) Calling .GetSSHKeyPath
	I0407 13:44:57.208173 1219664 main.go:141] libmachine: (newest-cni-896794) Calling .GetSSHKeyPath
	I0407 13:44:57.208374 1219664 main.go:141] libmachine: (newest-cni-896794) Calling .GetSSHUsername
	I0407 13:44:57.208578 1219664 main.go:141] libmachine: Using SSH client type: native
	I0407 13:44:57.208862 1219664 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x836360] 0x839060 <nil>  [] 0s} 192.168.61.209 22 <nil> <nil>}
	I0407 13:44:57.208890 1219664 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\snewest-cni-896794' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 newest-cni-896794/g' /etc/hosts;
				else 
					echo '127.0.1.1 newest-cni-896794' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0407 13:44:57.314849 1219664 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0407 13:44:57.314887 1219664 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/20602-1162386/.minikube CaCertPath:/home/jenkins/minikube-integration/20602-1162386/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/20602-1162386/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/20602-1162386/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/20602-1162386/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/20602-1162386/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/20602-1162386/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/20602-1162386/.minikube}
	I0407 13:44:57.314928 1219664 buildroot.go:174] setting up certificates
	I0407 13:44:57.314959 1219664 provision.go:84] configureAuth start
	I0407 13:44:57.314980 1219664 main.go:141] libmachine: (newest-cni-896794) Calling .GetMachineName
	I0407 13:44:57.315307 1219664 main.go:141] libmachine: (newest-cni-896794) Calling .GetIP
	I0407 13:44:57.318429 1219664 main.go:141] libmachine: (newest-cni-896794) DBG | domain newest-cni-896794 has defined MAC address 52:54:00:5c:76:4b in network mk-newest-cni-896794
	I0407 13:44:57.318884 1219664 main.go:141] libmachine: (newest-cni-896794) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5c:76:4b", ip: ""} in network mk-newest-cni-896794: {Iface:virbr3 ExpiryTime:2025-04-07 14:44:48 +0000 UTC Type:0 Mac:52:54:00:5c:76:4b Iaid: IPaddr:192.168.61.209 Prefix:24 Hostname:newest-cni-896794 Clientid:01:52:54:00:5c:76:4b}
	I0407 13:44:57.318916 1219664 main.go:141] libmachine: (newest-cni-896794) DBG | domain newest-cni-896794 has defined IP address 192.168.61.209 and MAC address 52:54:00:5c:76:4b in network mk-newest-cni-896794
	I0407 13:44:57.319136 1219664 main.go:141] libmachine: (newest-cni-896794) Calling .GetSSHHostname
	I0407 13:44:57.321601 1219664 main.go:141] libmachine: (newest-cni-896794) DBG | domain newest-cni-896794 has defined MAC address 52:54:00:5c:76:4b in network mk-newest-cni-896794
	I0407 13:44:57.322005 1219664 main.go:141] libmachine: (newest-cni-896794) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5c:76:4b", ip: ""} in network mk-newest-cni-896794: {Iface:virbr3 ExpiryTime:2025-04-07 14:44:48 +0000 UTC Type:0 Mac:52:54:00:5c:76:4b Iaid: IPaddr:192.168.61.209 Prefix:24 Hostname:newest-cni-896794 Clientid:01:52:54:00:5c:76:4b}
	I0407 13:44:57.322032 1219664 main.go:141] libmachine: (newest-cni-896794) DBG | domain newest-cni-896794 has defined IP address 192.168.61.209 and MAC address 52:54:00:5c:76:4b in network mk-newest-cni-896794
	I0407 13:44:57.322168 1219664 provision.go:143] copyHostCerts
	I0407 13:44:57.322242 1219664 exec_runner.go:144] found /home/jenkins/minikube-integration/20602-1162386/.minikube/ca.pem, removing ...
	I0407 13:44:57.322255 1219664 exec_runner.go:203] rm: /home/jenkins/minikube-integration/20602-1162386/.minikube/ca.pem
	I0407 13:44:57.322339 1219664 exec_runner.go:151] cp: /home/jenkins/minikube-integration/20602-1162386/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/20602-1162386/.minikube/ca.pem (1078 bytes)
	I0407 13:44:57.322463 1219664 exec_runner.go:144] found /home/jenkins/minikube-integration/20602-1162386/.minikube/cert.pem, removing ...
	I0407 13:44:57.322475 1219664 exec_runner.go:203] rm: /home/jenkins/minikube-integration/20602-1162386/.minikube/cert.pem
	I0407 13:44:57.322514 1219664 exec_runner.go:151] cp: /home/jenkins/minikube-integration/20602-1162386/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/20602-1162386/.minikube/cert.pem (1123 bytes)
	I0407 13:44:57.322616 1219664 exec_runner.go:144] found /home/jenkins/minikube-integration/20602-1162386/.minikube/key.pem, removing ...
	I0407 13:44:57.322626 1219664 exec_runner.go:203] rm: /home/jenkins/minikube-integration/20602-1162386/.minikube/key.pem
	I0407 13:44:57.322654 1219664 exec_runner.go:151] cp: /home/jenkins/minikube-integration/20602-1162386/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/20602-1162386/.minikube/key.pem (1675 bytes)
	I0407 13:44:57.322720 1219664 provision.go:117] generating server cert: /home/jenkins/minikube-integration/20602-1162386/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/20602-1162386/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/20602-1162386/.minikube/certs/ca-key.pem org=jenkins.newest-cni-896794 san=[127.0.0.1 192.168.61.209 localhost minikube newest-cni-896794]
	I0407 13:44:57.364422 1219664 provision.go:177] copyRemoteCerts
	I0407 13:44:57.364492 1219664 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0407 13:44:57.364528 1219664 main.go:141] libmachine: (newest-cni-896794) Calling .GetSSHHostname
	I0407 13:44:57.367556 1219664 main.go:141] libmachine: (newest-cni-896794) DBG | domain newest-cni-896794 has defined MAC address 52:54:00:5c:76:4b in network mk-newest-cni-896794
	I0407 13:44:57.367948 1219664 main.go:141] libmachine: (newest-cni-896794) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5c:76:4b", ip: ""} in network mk-newest-cni-896794: {Iface:virbr3 ExpiryTime:2025-04-07 14:44:48 +0000 UTC Type:0 Mac:52:54:00:5c:76:4b Iaid: IPaddr:192.168.61.209 Prefix:24 Hostname:newest-cni-896794 Clientid:01:52:54:00:5c:76:4b}
	I0407 13:44:57.367981 1219664 main.go:141] libmachine: (newest-cni-896794) DBG | domain newest-cni-896794 has defined IP address 192.168.61.209 and MAC address 52:54:00:5c:76:4b in network mk-newest-cni-896794
	I0407 13:44:57.368212 1219664 main.go:141] libmachine: (newest-cni-896794) Calling .GetSSHPort
	I0407 13:44:57.368431 1219664 main.go:141] libmachine: (newest-cni-896794) Calling .GetSSHKeyPath
	I0407 13:44:57.368615 1219664 main.go:141] libmachine: (newest-cni-896794) Calling .GetSSHUsername
	I0407 13:44:57.368738 1219664 sshutil.go:53] new ssh client: &{IP:192.168.61.209 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20602-1162386/.minikube/machines/newest-cni-896794/id_rsa Username:docker}
	I0407 13:44:57.448583 1219664 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20602-1162386/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0407 13:44:57.475992 1219664 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20602-1162386/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0407 13:44:57.504480 1219664 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20602-1162386/.minikube/machines/server.pem --> /etc/docker/server.pem (1220 bytes)
	I0407 13:44:57.532200 1219664 provision.go:87] duration metric: took 217.222688ms to configureAuth
	I0407 13:44:57.532236 1219664 buildroot.go:189] setting minikube options for container-runtime
	I0407 13:44:57.532450 1219664 config.go:182] Loaded profile config "newest-cni-896794": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.32.2
	I0407 13:44:57.532554 1219664 main.go:141] libmachine: (newest-cni-896794) Calling .GetSSHHostname
	I0407 13:44:57.536553 1219664 main.go:141] libmachine: (newest-cni-896794) DBG | domain newest-cni-896794 has defined MAC address 52:54:00:5c:76:4b in network mk-newest-cni-896794
	I0407 13:44:57.536984 1219664 main.go:141] libmachine: (newest-cni-896794) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5c:76:4b", ip: ""} in network mk-newest-cni-896794: {Iface:virbr3 ExpiryTime:2025-04-07 14:44:48 +0000 UTC Type:0 Mac:52:54:00:5c:76:4b Iaid: IPaddr:192.168.61.209 Prefix:24 Hostname:newest-cni-896794 Clientid:01:52:54:00:5c:76:4b}
	I0407 13:44:57.537013 1219664 main.go:141] libmachine: (newest-cni-896794) DBG | domain newest-cni-896794 has defined IP address 192.168.61.209 and MAC address 52:54:00:5c:76:4b in network mk-newest-cni-896794
	I0407 13:44:57.537316 1219664 main.go:141] libmachine: (newest-cni-896794) Calling .GetSSHPort
	I0407 13:44:57.537555 1219664 main.go:141] libmachine: (newest-cni-896794) Calling .GetSSHKeyPath
	I0407 13:44:57.537777 1219664 main.go:141] libmachine: (newest-cni-896794) Calling .GetSSHKeyPath
	I0407 13:44:57.537939 1219664 main.go:141] libmachine: (newest-cni-896794) Calling .GetSSHUsername
	I0407 13:44:57.538153 1219664 main.go:141] libmachine: Using SSH client type: native
	I0407 13:44:57.538426 1219664 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x836360] 0x839060 <nil>  [] 0s} 192.168.61.209 22 <nil> <nil>}
	I0407 13:44:57.538447 1219664 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0407 13:44:58.014603 1219904 start.go:364] duration metric: took 12.229454117s to acquireMachinesLock for "kubernetes-upgrade-973925"
	I0407 13:44:58.014668 1219904 start.go:96] Skipping create...Using existing machine configuration
	I0407 13:44:58.014679 1219904 fix.go:54] fixHost starting: 
	I0407 13:44:58.015097 1219904 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0407 13:44:58.015167 1219904 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0407 13:44:58.034297 1219904 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42609
	I0407 13:44:58.034848 1219904 main.go:141] libmachine: () Calling .GetVersion
	I0407 13:44:58.035377 1219904 main.go:141] libmachine: Using API Version  1
	I0407 13:44:58.035404 1219904 main.go:141] libmachine: () Calling .SetConfigRaw
	I0407 13:44:58.035751 1219904 main.go:141] libmachine: () Calling .GetMachineName
	I0407 13:44:58.035956 1219904 main.go:141] libmachine: (kubernetes-upgrade-973925) Calling .DriverName
	I0407 13:44:58.036096 1219904 main.go:141] libmachine: (kubernetes-upgrade-973925) Calling .GetState
	I0407 13:44:58.037992 1219904 fix.go:112] recreateIfNeeded on kubernetes-upgrade-973925: state=Running err=<nil>
	W0407 13:44:58.038016 1219904 fix.go:138] unexpected machine state, will restart: <nil>
	I0407 13:44:58.040163 1219904 out.go:177] * Updating the running kvm2 "kubernetes-upgrade-973925" VM ...
	I0407 13:44:57.770477 1219664 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0407 13:44:57.770514 1219664 main.go:141] libmachine: Checking connection to Docker...
	I0407 13:44:57.770528 1219664 main.go:141] libmachine: (newest-cni-896794) Calling .GetURL
	I0407 13:44:57.772201 1219664 main.go:141] libmachine: (newest-cni-896794) DBG | using libvirt version 6000000
	I0407 13:44:57.775057 1219664 main.go:141] libmachine: (newest-cni-896794) DBG | domain newest-cni-896794 has defined MAC address 52:54:00:5c:76:4b in network mk-newest-cni-896794
	I0407 13:44:57.775441 1219664 main.go:141] libmachine: (newest-cni-896794) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5c:76:4b", ip: ""} in network mk-newest-cni-896794: {Iface:virbr3 ExpiryTime:2025-04-07 14:44:48 +0000 UTC Type:0 Mac:52:54:00:5c:76:4b Iaid: IPaddr:192.168.61.209 Prefix:24 Hostname:newest-cni-896794 Clientid:01:52:54:00:5c:76:4b}
	I0407 13:44:57.775469 1219664 main.go:141] libmachine: (newest-cni-896794) DBG | domain newest-cni-896794 has defined IP address 192.168.61.209 and MAC address 52:54:00:5c:76:4b in network mk-newest-cni-896794
	I0407 13:44:57.775706 1219664 main.go:141] libmachine: Docker is up and running!
	I0407 13:44:57.775724 1219664 main.go:141] libmachine: Reticulating splines...
	I0407 13:44:57.775742 1219664 client.go:171] duration metric: took 24.990527875s to LocalClient.Create
	I0407 13:44:57.775769 1219664 start.go:167] duration metric: took 24.990610981s to libmachine.API.Create "newest-cni-896794"
	I0407 13:44:57.775783 1219664 start.go:293] postStartSetup for "newest-cni-896794" (driver="kvm2")
	I0407 13:44:57.775796 1219664 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0407 13:44:57.775814 1219664 main.go:141] libmachine: (newest-cni-896794) Calling .DriverName
	I0407 13:44:57.776143 1219664 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0407 13:44:57.776168 1219664 main.go:141] libmachine: (newest-cni-896794) Calling .GetSSHHostname
	I0407 13:44:57.778698 1219664 main.go:141] libmachine: (newest-cni-896794) DBG | domain newest-cni-896794 has defined MAC address 52:54:00:5c:76:4b in network mk-newest-cni-896794
	I0407 13:44:57.779171 1219664 main.go:141] libmachine: (newest-cni-896794) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5c:76:4b", ip: ""} in network mk-newest-cni-896794: {Iface:virbr3 ExpiryTime:2025-04-07 14:44:48 +0000 UTC Type:0 Mac:52:54:00:5c:76:4b Iaid: IPaddr:192.168.61.209 Prefix:24 Hostname:newest-cni-896794 Clientid:01:52:54:00:5c:76:4b}
	I0407 13:44:57.779202 1219664 main.go:141] libmachine: (newest-cni-896794) DBG | domain newest-cni-896794 has defined IP address 192.168.61.209 and MAC address 52:54:00:5c:76:4b in network mk-newest-cni-896794
	I0407 13:44:57.779422 1219664 main.go:141] libmachine: (newest-cni-896794) Calling .GetSSHPort
	I0407 13:44:57.779662 1219664 main.go:141] libmachine: (newest-cni-896794) Calling .GetSSHKeyPath
	I0407 13:44:57.779856 1219664 main.go:141] libmachine: (newest-cni-896794) Calling .GetSSHUsername
	I0407 13:44:57.780061 1219664 sshutil.go:53] new ssh client: &{IP:192.168.61.209 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20602-1162386/.minikube/machines/newest-cni-896794/id_rsa Username:docker}
	I0407 13:44:57.861921 1219664 ssh_runner.go:195] Run: cat /etc/os-release
	I0407 13:44:57.867142 1219664 info.go:137] Remote host: Buildroot 2023.02.9
	I0407 13:44:57.867182 1219664 filesync.go:126] Scanning /home/jenkins/minikube-integration/20602-1162386/.minikube/addons for local assets ...
	I0407 13:44:57.867260 1219664 filesync.go:126] Scanning /home/jenkins/minikube-integration/20602-1162386/.minikube/files for local assets ...
	I0407 13:44:57.867354 1219664 filesync.go:149] local asset: /home/jenkins/minikube-integration/20602-1162386/.minikube/files/etc/ssl/certs/11697162.pem -> 11697162.pem in /etc/ssl/certs
	I0407 13:44:57.867452 1219664 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0407 13:44:57.878793 1219664 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20602-1162386/.minikube/files/etc/ssl/certs/11697162.pem --> /etc/ssl/certs/11697162.pem (1708 bytes)
	I0407 13:44:57.905536 1219664 start.go:296] duration metric: took 129.73435ms for postStartSetup
	I0407 13:44:57.905594 1219664 main.go:141] libmachine: (newest-cni-896794) Calling .GetConfigRaw
	I0407 13:44:57.906283 1219664 main.go:141] libmachine: (newest-cni-896794) Calling .GetIP
	I0407 13:44:57.909563 1219664 main.go:141] libmachine: (newest-cni-896794) DBG | domain newest-cni-896794 has defined MAC address 52:54:00:5c:76:4b in network mk-newest-cni-896794
	I0407 13:44:57.910021 1219664 main.go:141] libmachine: (newest-cni-896794) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5c:76:4b", ip: ""} in network mk-newest-cni-896794: {Iface:virbr3 ExpiryTime:2025-04-07 14:44:48 +0000 UTC Type:0 Mac:52:54:00:5c:76:4b Iaid: IPaddr:192.168.61.209 Prefix:24 Hostname:newest-cni-896794 Clientid:01:52:54:00:5c:76:4b}
	I0407 13:44:57.910053 1219664 main.go:141] libmachine: (newest-cni-896794) DBG | domain newest-cni-896794 has defined IP address 192.168.61.209 and MAC address 52:54:00:5c:76:4b in network mk-newest-cni-896794
	I0407 13:44:57.910385 1219664 profile.go:143] Saving config to /home/jenkins/minikube-integration/20602-1162386/.minikube/profiles/newest-cni-896794/config.json ...
	I0407 13:44:57.910605 1219664 start.go:128] duration metric: took 25.145211609s to createHost
	I0407 13:44:57.910631 1219664 main.go:141] libmachine: (newest-cni-896794) Calling .GetSSHHostname
	I0407 13:44:57.913530 1219664 main.go:141] libmachine: (newest-cni-896794) DBG | domain newest-cni-896794 has defined MAC address 52:54:00:5c:76:4b in network mk-newest-cni-896794
	I0407 13:44:57.913978 1219664 main.go:141] libmachine: (newest-cni-896794) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5c:76:4b", ip: ""} in network mk-newest-cni-896794: {Iface:virbr3 ExpiryTime:2025-04-07 14:44:48 +0000 UTC Type:0 Mac:52:54:00:5c:76:4b Iaid: IPaddr:192.168.61.209 Prefix:24 Hostname:newest-cni-896794 Clientid:01:52:54:00:5c:76:4b}
	I0407 13:44:57.914004 1219664 main.go:141] libmachine: (newest-cni-896794) DBG | domain newest-cni-896794 has defined IP address 192.168.61.209 and MAC address 52:54:00:5c:76:4b in network mk-newest-cni-896794
	I0407 13:44:57.914152 1219664 main.go:141] libmachine: (newest-cni-896794) Calling .GetSSHPort
	I0407 13:44:57.914394 1219664 main.go:141] libmachine: (newest-cni-896794) Calling .GetSSHKeyPath
	I0407 13:44:57.914583 1219664 main.go:141] libmachine: (newest-cni-896794) Calling .GetSSHKeyPath
	I0407 13:44:57.914749 1219664 main.go:141] libmachine: (newest-cni-896794) Calling .GetSSHUsername
	I0407 13:44:57.914933 1219664 main.go:141] libmachine: Using SSH client type: native
	I0407 13:44:57.915176 1219664 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x836360] 0x839060 <nil>  [] 0s} 192.168.61.209 22 <nil> <nil>}
	I0407 13:44:57.915190 1219664 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0407 13:44:58.014418 1219664 main.go:141] libmachine: SSH cmd err, output: <nil>: 1744033497.988841405
	
	I0407 13:44:58.014442 1219664 fix.go:216] guest clock: 1744033497.988841405
	I0407 13:44:58.014450 1219664 fix.go:229] Guest: 2025-04-07 13:44:57.988841405 +0000 UTC Remote: 2025-04-07 13:44:57.91061789 +0000 UTC m=+25.279604305 (delta=78.223515ms)
	I0407 13:44:58.014474 1219664 fix.go:200] guest clock delta is within tolerance: 78.223515ms
	I0407 13:44:58.014479 1219664 start.go:83] releasing machines lock for "newest-cni-896794", held for 25.249197848s
	I0407 13:44:58.014504 1219664 main.go:141] libmachine: (newest-cni-896794) Calling .DriverName
	I0407 13:44:58.014827 1219664 main.go:141] libmachine: (newest-cni-896794) Calling .GetIP
	I0407 13:44:58.018888 1219664 main.go:141] libmachine: (newest-cni-896794) DBG | domain newest-cni-896794 has defined MAC address 52:54:00:5c:76:4b in network mk-newest-cni-896794
	I0407 13:44:58.019319 1219664 main.go:141] libmachine: (newest-cni-896794) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5c:76:4b", ip: ""} in network mk-newest-cni-896794: {Iface:virbr3 ExpiryTime:2025-04-07 14:44:48 +0000 UTC Type:0 Mac:52:54:00:5c:76:4b Iaid: IPaddr:192.168.61.209 Prefix:24 Hostname:newest-cni-896794 Clientid:01:52:54:00:5c:76:4b}
	I0407 13:44:58.019351 1219664 main.go:141] libmachine: (newest-cni-896794) DBG | domain newest-cni-896794 has defined IP address 192.168.61.209 and MAC address 52:54:00:5c:76:4b in network mk-newest-cni-896794
	I0407 13:44:58.019494 1219664 main.go:141] libmachine: (newest-cni-896794) Calling .DriverName
	I0407 13:44:58.020154 1219664 main.go:141] libmachine: (newest-cni-896794) Calling .DriverName
	I0407 13:44:58.020374 1219664 main.go:141] libmachine: (newest-cni-896794) Calling .DriverName
	I0407 13:44:58.020465 1219664 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0407 13:44:58.020517 1219664 main.go:141] libmachine: (newest-cni-896794) Calling .GetSSHHostname
	I0407 13:44:58.020590 1219664 ssh_runner.go:195] Run: cat /version.json
	I0407 13:44:58.020614 1219664 main.go:141] libmachine: (newest-cni-896794) Calling .GetSSHHostname
	I0407 13:44:58.023462 1219664 main.go:141] libmachine: (newest-cni-896794) DBG | domain newest-cni-896794 has defined MAC address 52:54:00:5c:76:4b in network mk-newest-cni-896794
	I0407 13:44:58.023673 1219664 main.go:141] libmachine: (newest-cni-896794) DBG | domain newest-cni-896794 has defined MAC address 52:54:00:5c:76:4b in network mk-newest-cni-896794
	I0407 13:44:58.023916 1219664 main.go:141] libmachine: (newest-cni-896794) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5c:76:4b", ip: ""} in network mk-newest-cni-896794: {Iface:virbr3 ExpiryTime:2025-04-07 14:44:48 +0000 UTC Type:0 Mac:52:54:00:5c:76:4b Iaid: IPaddr:192.168.61.209 Prefix:24 Hostname:newest-cni-896794 Clientid:01:52:54:00:5c:76:4b}
	I0407 13:44:58.023963 1219664 main.go:141] libmachine: (newest-cni-896794) DBG | domain newest-cni-896794 has defined IP address 192.168.61.209 and MAC address 52:54:00:5c:76:4b in network mk-newest-cni-896794
	I0407 13:44:58.024223 1219664 main.go:141] libmachine: (newest-cni-896794) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5c:76:4b", ip: ""} in network mk-newest-cni-896794: {Iface:virbr3 ExpiryTime:2025-04-07 14:44:48 +0000 UTC Type:0 Mac:52:54:00:5c:76:4b Iaid: IPaddr:192.168.61.209 Prefix:24 Hostname:newest-cni-896794 Clientid:01:52:54:00:5c:76:4b}
	I0407 13:44:58.024257 1219664 main.go:141] libmachine: (newest-cni-896794) DBG | domain newest-cni-896794 has defined IP address 192.168.61.209 and MAC address 52:54:00:5c:76:4b in network mk-newest-cni-896794
	I0407 13:44:58.024293 1219664 main.go:141] libmachine: (newest-cni-896794) Calling .GetSSHPort
	I0407 13:44:58.024457 1219664 main.go:141] libmachine: (newest-cni-896794) Calling .GetSSHPort
	I0407 13:44:58.024579 1219664 main.go:141] libmachine: (newest-cni-896794) Calling .GetSSHKeyPath
	I0407 13:44:58.024655 1219664 main.go:141] libmachine: (newest-cni-896794) Calling .GetSSHKeyPath
	I0407 13:44:58.024771 1219664 main.go:141] libmachine: (newest-cni-896794) Calling .GetSSHUsername
	I0407 13:44:58.024852 1219664 main.go:141] libmachine: (newest-cni-896794) Calling .GetSSHUsername
	I0407 13:44:58.024883 1219664 sshutil.go:53] new ssh client: &{IP:192.168.61.209 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20602-1162386/.minikube/machines/newest-cni-896794/id_rsa Username:docker}
	I0407 13:44:58.024991 1219664 sshutil.go:53] new ssh client: &{IP:192.168.61.209 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20602-1162386/.minikube/machines/newest-cni-896794/id_rsa Username:docker}
	I0407 13:44:58.103481 1219664 ssh_runner.go:195] Run: systemctl --version
	I0407 13:44:58.125244 1219664 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0407 13:44:58.292089 1219664 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0407 13:44:58.298102 1219664 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0407 13:44:58.298187 1219664 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0407 13:44:58.314961 1219664 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0407 13:44:58.314997 1219664 start.go:495] detecting cgroup driver to use...
	I0407 13:44:58.315080 1219664 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0407 13:44:58.337941 1219664 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0407 13:44:58.355774 1219664 docker.go:217] disabling cri-docker service (if available) ...
	I0407 13:44:58.355856 1219664 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0407 13:44:58.371075 1219664 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0407 13:44:58.386994 1219664 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0407 13:44:58.514101 1219664 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0407 13:44:58.679228 1219664 docker.go:233] disabling docker service ...
	I0407 13:44:58.679332 1219664 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0407 13:44:58.695516 1219664 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0407 13:44:58.710982 1219664 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0407 13:44:58.857037 1219664 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0407 13:44:58.985684 1219664 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0407 13:44:59.003662 1219664 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0407 13:44:59.025128 1219664 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10" pause image...
	I0407 13:44:59.025197 1219664 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0407 13:44:59.037137 1219664 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0407 13:44:59.037229 1219664 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0407 13:44:59.049643 1219664 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0407 13:44:59.062115 1219664 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0407 13:44:59.075975 1219664 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0407 13:44:59.088853 1219664 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0407 13:44:59.101039 1219664 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I0407 13:44:59.121119 1219664 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I0407 13:44:59.134558 1219664 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0407 13:44:59.146585 1219664 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0407 13:44:59.146682 1219664 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0407 13:44:59.162053 1219664 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0407 13:44:59.171796 1219664 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0407 13:44:59.282907 1219664 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0407 13:44:59.379544 1219664 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I0407 13:44:59.379627 1219664 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0407 13:44:59.384204 1219664 start.go:563] Will wait 60s for crictl version
	I0407 13:44:59.384290 1219664 ssh_runner.go:195] Run: which crictl
	I0407 13:44:59.387844 1219664 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0407 13:44:59.423145 1219664 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0407 13:44:59.423235 1219664 ssh_runner.go:195] Run: crio --version
	I0407 13:44:59.453826 1219664 ssh_runner.go:195] Run: crio --version
	I0407 13:44:59.486574 1219664 out.go:177] * Preparing Kubernetes v1.32.2 on CRI-O 1.29.1 ...
	I0407 13:44:59.487781 1219664 main.go:141] libmachine: (newest-cni-896794) Calling .GetIP
	I0407 13:44:59.490755 1219664 main.go:141] libmachine: (newest-cni-896794) DBG | domain newest-cni-896794 has defined MAC address 52:54:00:5c:76:4b in network mk-newest-cni-896794
	I0407 13:44:59.491170 1219664 main.go:141] libmachine: (newest-cni-896794) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5c:76:4b", ip: ""} in network mk-newest-cni-896794: {Iface:virbr3 ExpiryTime:2025-04-07 14:44:48 +0000 UTC Type:0 Mac:52:54:00:5c:76:4b Iaid: IPaddr:192.168.61.209 Prefix:24 Hostname:newest-cni-896794 Clientid:01:52:54:00:5c:76:4b}
	I0407 13:44:59.491203 1219664 main.go:141] libmachine: (newest-cni-896794) DBG | domain newest-cni-896794 has defined IP address 192.168.61.209 and MAC address 52:54:00:5c:76:4b in network mk-newest-cni-896794
	I0407 13:44:59.491472 1219664 ssh_runner.go:195] Run: grep 192.168.61.1	host.minikube.internal$ /etc/hosts
	I0407 13:44:59.495631 1219664 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.61.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0407 13:44:59.510229 1219664 out.go:177]   - kubeadm.pod-network-cidr=10.42.0.0/16
	I0407 13:44:58.041691 1219904 machine.go:93] provisionDockerMachine start ...
	I0407 13:44:58.041751 1219904 main.go:141] libmachine: (kubernetes-upgrade-973925) Calling .DriverName
	I0407 13:44:58.042009 1219904 main.go:141] libmachine: (kubernetes-upgrade-973925) Calling .GetSSHHostname
	I0407 13:44:58.045076 1219904 main.go:141] libmachine: (kubernetes-upgrade-973925) DBG | domain kubernetes-upgrade-973925 has defined MAC address 52:54:00:5e:eb:42 in network mk-kubernetes-upgrade-973925
	I0407 13:44:58.045558 1219904 main.go:141] libmachine: (kubernetes-upgrade-973925) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5e:eb:42", ip: ""} in network mk-kubernetes-upgrade-973925: {Iface:virbr2 ExpiryTime:2025-04-07 14:44:16 +0000 UTC Type:0 Mac:52:54:00:5e:eb:42 Iaid: IPaddr:192.168.50.245 Prefix:24 Hostname:kubernetes-upgrade-973925 Clientid:01:52:54:00:5e:eb:42}
	I0407 13:44:58.045596 1219904 main.go:141] libmachine: (kubernetes-upgrade-973925) DBG | domain kubernetes-upgrade-973925 has defined IP address 192.168.50.245 and MAC address 52:54:00:5e:eb:42 in network mk-kubernetes-upgrade-973925
	I0407 13:44:58.045788 1219904 main.go:141] libmachine: (kubernetes-upgrade-973925) Calling .GetSSHPort
	I0407 13:44:58.046012 1219904 main.go:141] libmachine: (kubernetes-upgrade-973925) Calling .GetSSHKeyPath
	I0407 13:44:58.046190 1219904 main.go:141] libmachine: (kubernetes-upgrade-973925) Calling .GetSSHKeyPath
	I0407 13:44:58.046345 1219904 main.go:141] libmachine: (kubernetes-upgrade-973925) Calling .GetSSHUsername
	I0407 13:44:58.046524 1219904 main.go:141] libmachine: Using SSH client type: native
	I0407 13:44:58.046744 1219904 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x836360] 0x839060 <nil>  [] 0s} 192.168.50.245 22 <nil> <nil>}
	I0407 13:44:58.046758 1219904 main.go:141] libmachine: About to run SSH command:
	hostname
	I0407 13:44:58.154943 1219904 main.go:141] libmachine: SSH cmd err, output: <nil>: kubernetes-upgrade-973925
	
	I0407 13:44:58.155005 1219904 main.go:141] libmachine: (kubernetes-upgrade-973925) Calling .GetMachineName
	I0407 13:44:58.155337 1219904 buildroot.go:166] provisioning hostname "kubernetes-upgrade-973925"
	I0407 13:44:58.155370 1219904 main.go:141] libmachine: (kubernetes-upgrade-973925) Calling .GetMachineName
	I0407 13:44:58.155632 1219904 main.go:141] libmachine: (kubernetes-upgrade-973925) Calling .GetSSHHostname
	I0407 13:44:58.158684 1219904 main.go:141] libmachine: (kubernetes-upgrade-973925) DBG | domain kubernetes-upgrade-973925 has defined MAC address 52:54:00:5e:eb:42 in network mk-kubernetes-upgrade-973925
	I0407 13:44:58.159067 1219904 main.go:141] libmachine: (kubernetes-upgrade-973925) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5e:eb:42", ip: ""} in network mk-kubernetes-upgrade-973925: {Iface:virbr2 ExpiryTime:2025-04-07 14:44:16 +0000 UTC Type:0 Mac:52:54:00:5e:eb:42 Iaid: IPaddr:192.168.50.245 Prefix:24 Hostname:kubernetes-upgrade-973925 Clientid:01:52:54:00:5e:eb:42}
	I0407 13:44:58.159099 1219904 main.go:141] libmachine: (kubernetes-upgrade-973925) DBG | domain kubernetes-upgrade-973925 has defined IP address 192.168.50.245 and MAC address 52:54:00:5e:eb:42 in network mk-kubernetes-upgrade-973925
	I0407 13:44:58.159393 1219904 main.go:141] libmachine: (kubernetes-upgrade-973925) Calling .GetSSHPort
	I0407 13:44:58.159671 1219904 main.go:141] libmachine: (kubernetes-upgrade-973925) Calling .GetSSHKeyPath
	I0407 13:44:58.159906 1219904 main.go:141] libmachine: (kubernetes-upgrade-973925) Calling .GetSSHKeyPath
	I0407 13:44:58.160194 1219904 main.go:141] libmachine: (kubernetes-upgrade-973925) Calling .GetSSHUsername
	I0407 13:44:58.160543 1219904 main.go:141] libmachine: Using SSH client type: native
	I0407 13:44:58.160789 1219904 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x836360] 0x839060 <nil>  [] 0s} 192.168.50.245 22 <nil> <nil>}
	I0407 13:44:58.160802 1219904 main.go:141] libmachine: About to run SSH command:
	sudo hostname kubernetes-upgrade-973925 && echo "kubernetes-upgrade-973925" | sudo tee /etc/hostname
	I0407 13:44:58.285248 1219904 main.go:141] libmachine: SSH cmd err, output: <nil>: kubernetes-upgrade-973925
	
	I0407 13:44:58.285281 1219904 main.go:141] libmachine: (kubernetes-upgrade-973925) Calling .GetSSHHostname
	I0407 13:44:58.288249 1219904 main.go:141] libmachine: (kubernetes-upgrade-973925) DBG | domain kubernetes-upgrade-973925 has defined MAC address 52:54:00:5e:eb:42 in network mk-kubernetes-upgrade-973925
	I0407 13:44:58.288625 1219904 main.go:141] libmachine: (kubernetes-upgrade-973925) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5e:eb:42", ip: ""} in network mk-kubernetes-upgrade-973925: {Iface:virbr2 ExpiryTime:2025-04-07 14:44:16 +0000 UTC Type:0 Mac:52:54:00:5e:eb:42 Iaid: IPaddr:192.168.50.245 Prefix:24 Hostname:kubernetes-upgrade-973925 Clientid:01:52:54:00:5e:eb:42}
	I0407 13:44:58.288658 1219904 main.go:141] libmachine: (kubernetes-upgrade-973925) DBG | domain kubernetes-upgrade-973925 has defined IP address 192.168.50.245 and MAC address 52:54:00:5e:eb:42 in network mk-kubernetes-upgrade-973925
	I0407 13:44:58.288897 1219904 main.go:141] libmachine: (kubernetes-upgrade-973925) Calling .GetSSHPort
	I0407 13:44:58.289139 1219904 main.go:141] libmachine: (kubernetes-upgrade-973925) Calling .GetSSHKeyPath
	I0407 13:44:58.289325 1219904 main.go:141] libmachine: (kubernetes-upgrade-973925) Calling .GetSSHKeyPath
	I0407 13:44:58.289517 1219904 main.go:141] libmachine: (kubernetes-upgrade-973925) Calling .GetSSHUsername
	I0407 13:44:58.289752 1219904 main.go:141] libmachine: Using SSH client type: native
	I0407 13:44:58.290022 1219904 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x836360] 0x839060 <nil>  [] 0s} 192.168.50.245 22 <nil> <nil>}
	I0407 13:44:58.290050 1219904 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\skubernetes-upgrade-973925' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 kubernetes-upgrade-973925/g' /etc/hosts;
				else 
					echo '127.0.1.1 kubernetes-upgrade-973925' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0407 13:44:58.399743 1219904 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0407 13:44:58.399783 1219904 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/20602-1162386/.minikube CaCertPath:/home/jenkins/minikube-integration/20602-1162386/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/20602-1162386/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/20602-1162386/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/20602-1162386/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/20602-1162386/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/20602-1162386/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/20602-1162386/.minikube}
	I0407 13:44:58.399834 1219904 buildroot.go:174] setting up certificates
	I0407 13:44:58.399848 1219904 provision.go:84] configureAuth start
	I0407 13:44:58.399867 1219904 main.go:141] libmachine: (kubernetes-upgrade-973925) Calling .GetMachineName
	I0407 13:44:58.400242 1219904 main.go:141] libmachine: (kubernetes-upgrade-973925) Calling .GetIP
	I0407 13:44:58.402979 1219904 main.go:141] libmachine: (kubernetes-upgrade-973925) DBG | domain kubernetes-upgrade-973925 has defined MAC address 52:54:00:5e:eb:42 in network mk-kubernetes-upgrade-973925
	I0407 13:44:58.403341 1219904 main.go:141] libmachine: (kubernetes-upgrade-973925) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5e:eb:42", ip: ""} in network mk-kubernetes-upgrade-973925: {Iface:virbr2 ExpiryTime:2025-04-07 14:44:16 +0000 UTC Type:0 Mac:52:54:00:5e:eb:42 Iaid: IPaddr:192.168.50.245 Prefix:24 Hostname:kubernetes-upgrade-973925 Clientid:01:52:54:00:5e:eb:42}
	I0407 13:44:58.403378 1219904 main.go:141] libmachine: (kubernetes-upgrade-973925) DBG | domain kubernetes-upgrade-973925 has defined IP address 192.168.50.245 and MAC address 52:54:00:5e:eb:42 in network mk-kubernetes-upgrade-973925
	I0407 13:44:58.403574 1219904 main.go:141] libmachine: (kubernetes-upgrade-973925) Calling .GetSSHHostname
	I0407 13:44:58.406431 1219904 main.go:141] libmachine: (kubernetes-upgrade-973925) DBG | domain kubernetes-upgrade-973925 has defined MAC address 52:54:00:5e:eb:42 in network mk-kubernetes-upgrade-973925
	I0407 13:44:58.406999 1219904 main.go:141] libmachine: (kubernetes-upgrade-973925) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5e:eb:42", ip: ""} in network mk-kubernetes-upgrade-973925: {Iface:virbr2 ExpiryTime:2025-04-07 14:44:16 +0000 UTC Type:0 Mac:52:54:00:5e:eb:42 Iaid: IPaddr:192.168.50.245 Prefix:24 Hostname:kubernetes-upgrade-973925 Clientid:01:52:54:00:5e:eb:42}
	I0407 13:44:58.407046 1219904 main.go:141] libmachine: (kubernetes-upgrade-973925) DBG | domain kubernetes-upgrade-973925 has defined IP address 192.168.50.245 and MAC address 52:54:00:5e:eb:42 in network mk-kubernetes-upgrade-973925
	I0407 13:44:58.407197 1219904 provision.go:143] copyHostCerts
	I0407 13:44:58.407268 1219904 exec_runner.go:144] found /home/jenkins/minikube-integration/20602-1162386/.minikube/ca.pem, removing ...
	I0407 13:44:58.407292 1219904 exec_runner.go:203] rm: /home/jenkins/minikube-integration/20602-1162386/.minikube/ca.pem
	I0407 13:44:58.407366 1219904 exec_runner.go:151] cp: /home/jenkins/minikube-integration/20602-1162386/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/20602-1162386/.minikube/ca.pem (1078 bytes)
	I0407 13:44:58.407964 1219904 exec_runner.go:144] found /home/jenkins/minikube-integration/20602-1162386/.minikube/cert.pem, removing ...
	I0407 13:44:58.408019 1219904 exec_runner.go:203] rm: /home/jenkins/minikube-integration/20602-1162386/.minikube/cert.pem
	I0407 13:44:58.408079 1219904 exec_runner.go:151] cp: /home/jenkins/minikube-integration/20602-1162386/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/20602-1162386/.minikube/cert.pem (1123 bytes)
	I0407 13:44:58.408218 1219904 exec_runner.go:144] found /home/jenkins/minikube-integration/20602-1162386/.minikube/key.pem, removing ...
	I0407 13:44:58.408239 1219904 exec_runner.go:203] rm: /home/jenkins/minikube-integration/20602-1162386/.minikube/key.pem
	I0407 13:44:58.408279 1219904 exec_runner.go:151] cp: /home/jenkins/minikube-integration/20602-1162386/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/20602-1162386/.minikube/key.pem (1675 bytes)
	I0407 13:44:58.408382 1219904 provision.go:117] generating server cert: /home/jenkins/minikube-integration/20602-1162386/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/20602-1162386/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/20602-1162386/.minikube/certs/ca-key.pem org=jenkins.kubernetes-upgrade-973925 san=[127.0.0.1 192.168.50.245 kubernetes-upgrade-973925 localhost minikube]
	I0407 13:44:58.609358 1219904 provision.go:177] copyRemoteCerts
	I0407 13:44:58.609429 1219904 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0407 13:44:58.609461 1219904 main.go:141] libmachine: (kubernetes-upgrade-973925) Calling .GetSSHHostname
	I0407 13:44:58.612929 1219904 main.go:141] libmachine: (kubernetes-upgrade-973925) DBG | domain kubernetes-upgrade-973925 has defined MAC address 52:54:00:5e:eb:42 in network mk-kubernetes-upgrade-973925
	I0407 13:44:58.613416 1219904 main.go:141] libmachine: (kubernetes-upgrade-973925) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5e:eb:42", ip: ""} in network mk-kubernetes-upgrade-973925: {Iface:virbr2 ExpiryTime:2025-04-07 14:44:16 +0000 UTC Type:0 Mac:52:54:00:5e:eb:42 Iaid: IPaddr:192.168.50.245 Prefix:24 Hostname:kubernetes-upgrade-973925 Clientid:01:52:54:00:5e:eb:42}
	I0407 13:44:58.613453 1219904 main.go:141] libmachine: (kubernetes-upgrade-973925) DBG | domain kubernetes-upgrade-973925 has defined IP address 192.168.50.245 and MAC address 52:54:00:5e:eb:42 in network mk-kubernetes-upgrade-973925
	I0407 13:44:58.613849 1219904 main.go:141] libmachine: (kubernetes-upgrade-973925) Calling .GetSSHPort
	I0407 13:44:58.614132 1219904 main.go:141] libmachine: (kubernetes-upgrade-973925) Calling .GetSSHKeyPath
	I0407 13:44:58.614357 1219904 main.go:141] libmachine: (kubernetes-upgrade-973925) Calling .GetSSHUsername
	I0407 13:44:58.614522 1219904 sshutil.go:53] new ssh client: &{IP:192.168.50.245 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20602-1162386/.minikube/machines/kubernetes-upgrade-973925/id_rsa Username:docker}
	I0407 13:44:58.702019 1219904 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20602-1162386/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0407 13:44:58.736211 1219904 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20602-1162386/.minikube/machines/server.pem --> /etc/docker/server.pem (1241 bytes)
	I0407 13:44:58.764127 1219904 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20602-1162386/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0407 13:44:58.792791 1219904 provision.go:87] duration metric: took 392.924168ms to configureAuth
	I0407 13:44:58.792826 1219904 buildroot.go:189] setting minikube options for container-runtime
	I0407 13:44:58.793061 1219904 config.go:182] Loaded profile config "kubernetes-upgrade-973925": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.32.2
	I0407 13:44:58.793177 1219904 main.go:141] libmachine: (kubernetes-upgrade-973925) Calling .GetSSHHostname
	I0407 13:44:58.796339 1219904 main.go:141] libmachine: (kubernetes-upgrade-973925) DBG | domain kubernetes-upgrade-973925 has defined MAC address 52:54:00:5e:eb:42 in network mk-kubernetes-upgrade-973925
	I0407 13:44:58.796782 1219904 main.go:141] libmachine: (kubernetes-upgrade-973925) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5e:eb:42", ip: ""} in network mk-kubernetes-upgrade-973925: {Iface:virbr2 ExpiryTime:2025-04-07 14:44:16 +0000 UTC Type:0 Mac:52:54:00:5e:eb:42 Iaid: IPaddr:192.168.50.245 Prefix:24 Hostname:kubernetes-upgrade-973925 Clientid:01:52:54:00:5e:eb:42}
	I0407 13:44:58.796818 1219904 main.go:141] libmachine: (kubernetes-upgrade-973925) DBG | domain kubernetes-upgrade-973925 has defined IP address 192.168.50.245 and MAC address 52:54:00:5e:eb:42 in network mk-kubernetes-upgrade-973925
	I0407 13:44:58.797108 1219904 main.go:141] libmachine: (kubernetes-upgrade-973925) Calling .GetSSHPort
	I0407 13:44:58.797369 1219904 main.go:141] libmachine: (kubernetes-upgrade-973925) Calling .GetSSHKeyPath
	I0407 13:44:58.797582 1219904 main.go:141] libmachine: (kubernetes-upgrade-973925) Calling .GetSSHKeyPath
	I0407 13:44:58.797794 1219904 main.go:141] libmachine: (kubernetes-upgrade-973925) Calling .GetSSHUsername
	I0407 13:44:58.797967 1219904 main.go:141] libmachine: Using SSH client type: native
	I0407 13:44:58.798218 1219904 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x836360] 0x839060 <nil>  [] 0s} 192.168.50.245 22 <nil> <nil>}
	I0407 13:44:58.798239 1219904 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0407 13:44:59.511758 1219664 kubeadm.go:883] updating cluster {Name:newest-cni-896794 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.35.0-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.46-1743675393-20591@sha256:8de8167e280414c9d16e4c3da59bc85bc7c9cc24228af995863fc7bcabfcf727 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.32.2 ClusterName:newest-cni-8
96794 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.61.209 Port:8443 KubernetesVersion:v1.32.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:
262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0407 13:44:59.511897 1219664 preload.go:131] Checking if preload exists for k8s version v1.32.2 and runtime crio
	I0407 13:44:59.511962 1219664 ssh_runner.go:195] Run: sudo crictl images --output json
	I0407 13:44:59.545551 1219664 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.32.2". assuming images are not preloaded.
	I0407 13:44:59.545623 1219664 ssh_runner.go:195] Run: which lz4
	I0407 13:44:59.550213 1219664 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I0407 13:44:59.555043 1219664 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0407 13:44:59.555100 1219664 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20602-1162386/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.32.2-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (399124012 bytes)
	I0407 13:45:00.954147 1219664 crio.go:462] duration metric: took 1.403980444s to copy over tarball
	I0407 13:45:00.954258 1219664 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I0407 13:45:05.553237 1219904 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0407 13:45:05.553279 1219904 machine.go:96] duration metric: took 7.511541219s to provisionDockerMachine
	I0407 13:45:05.553295 1219904 start.go:293] postStartSetup for "kubernetes-upgrade-973925" (driver="kvm2")
	I0407 13:45:05.553310 1219904 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0407 13:45:05.553335 1219904 main.go:141] libmachine: (kubernetes-upgrade-973925) Calling .DriverName
	I0407 13:45:05.553832 1219904 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0407 13:45:05.553881 1219904 main.go:141] libmachine: (kubernetes-upgrade-973925) Calling .GetSSHHostname
	I0407 13:45:05.559511 1219904 main.go:141] libmachine: (kubernetes-upgrade-973925) DBG | domain kubernetes-upgrade-973925 has defined MAC address 52:54:00:5e:eb:42 in network mk-kubernetes-upgrade-973925
	I0407 13:45:05.560154 1219904 main.go:141] libmachine: (kubernetes-upgrade-973925) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5e:eb:42", ip: ""} in network mk-kubernetes-upgrade-973925: {Iface:virbr2 ExpiryTime:2025-04-07 14:44:16 +0000 UTC Type:0 Mac:52:54:00:5e:eb:42 Iaid: IPaddr:192.168.50.245 Prefix:24 Hostname:kubernetes-upgrade-973925 Clientid:01:52:54:00:5e:eb:42}
	I0407 13:45:05.560193 1219904 main.go:141] libmachine: (kubernetes-upgrade-973925) DBG | domain kubernetes-upgrade-973925 has defined IP address 192.168.50.245 and MAC address 52:54:00:5e:eb:42 in network mk-kubernetes-upgrade-973925
	I0407 13:45:05.560508 1219904 main.go:141] libmachine: (kubernetes-upgrade-973925) Calling .GetSSHPort
	I0407 13:45:05.560878 1219904 main.go:141] libmachine: (kubernetes-upgrade-973925) Calling .GetSSHKeyPath
	I0407 13:45:05.561208 1219904 main.go:141] libmachine: (kubernetes-upgrade-973925) Calling .GetSSHUsername
	I0407 13:45:05.561494 1219904 sshutil.go:53] new ssh client: &{IP:192.168.50.245 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20602-1162386/.minikube/machines/kubernetes-upgrade-973925/id_rsa Username:docker}
	I0407 13:45:03.216081 1219664 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (2.261793416s)
	I0407 13:45:03.216112 1219664 crio.go:469] duration metric: took 2.261928794s to extract the tarball
	I0407 13:45:03.216120 1219664 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I0407 13:45:03.255063 1219664 ssh_runner.go:195] Run: sudo crictl images --output json
	I0407 13:45:03.302790 1219664 crio.go:514] all images are preloaded for cri-o runtime.
	I0407 13:45:03.302825 1219664 cache_images.go:84] Images are preloaded, skipping loading
	I0407 13:45:03.302837 1219664 kubeadm.go:934] updating node { 192.168.61.209 8443 v1.32.2 crio true true} ...
	I0407 13:45:03.302974 1219664 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.32.2/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=newest-cni-896794 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.61.209
	
	[Install]
	 config:
	{KubernetesVersion:v1.32.2 ClusterName:newest-cni-896794 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0407 13:45:03.303053 1219664 ssh_runner.go:195] Run: crio config
	I0407 13:45:03.348361 1219664 cni.go:84] Creating CNI manager for ""
	I0407 13:45:03.348392 1219664 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0407 13:45:03.348409 1219664 kubeadm.go:84] Using pod CIDR: 10.42.0.0/16
	I0407 13:45:03.348449 1219664 kubeadm.go:189] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.42.0.0/16 AdvertiseAddress:192.168.61.209 APIServerPort:8443 KubernetesVersion:v1.32.2 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:newest-cni-896794 NodeName:newest-cni-896794 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.61.209"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.61.209 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath
:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0407 13:45:03.348583 1219664 kubeadm.go:195] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.61.209
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "newest-cni-896794"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.61.209"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.61.209"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      - name: "proxy-refresh-interval"
	        value: "70000"
	kubernetesVersion: v1.32.2
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.42.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.42.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0407 13:45:03.348648 1219664 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.32.2
	I0407 13:45:03.359232 1219664 binaries.go:44] Found k8s binaries, skipping transfer
	I0407 13:45:03.359328 1219664 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0407 13:45:03.369634 1219664 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (317 bytes)
	I0407 13:45:03.387630 1219664 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0407 13:45:03.410539 1219664 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2295 bytes)
	I0407 13:45:03.432337 1219664 ssh_runner.go:195] Run: grep 192.168.61.209	control-plane.minikube.internal$ /etc/hosts
	I0407 13:45:03.437502 1219664 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.61.209	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0407 13:45:03.451485 1219664 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0407 13:45:03.591850 1219664 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0407 13:45:03.611619 1219664 certs.go:68] Setting up /home/jenkins/minikube-integration/20602-1162386/.minikube/profiles/newest-cni-896794 for IP: 192.168.61.209
	I0407 13:45:03.611648 1219664 certs.go:194] generating shared ca certs ...
	I0407 13:45:03.611670 1219664 certs.go:226] acquiring lock for ca certs: {Name:mk8e89191fca7f2111bdd08c345368f593b0d5d0 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0407 13:45:03.611894 1219664 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/20602-1162386/.minikube/ca.key
	I0407 13:45:03.611950 1219664 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/20602-1162386/.minikube/proxy-client-ca.key
	I0407 13:45:03.611965 1219664 certs.go:256] generating profile certs ...
	I0407 13:45:03.612045 1219664 certs.go:363] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/20602-1162386/.minikube/profiles/newest-cni-896794/client.key
	I0407 13:45:03.612067 1219664 crypto.go:68] Generating cert /home/jenkins/minikube-integration/20602-1162386/.minikube/profiles/newest-cni-896794/client.crt with IP's: []
	I0407 13:45:03.857655 1219664 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/20602-1162386/.minikube/profiles/newest-cni-896794/client.crt ...
	I0407 13:45:03.857702 1219664 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20602-1162386/.minikube/profiles/newest-cni-896794/client.crt: {Name:mk6704a7d423c86a031919c5fbf084986133637d Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0407 13:45:03.857942 1219664 crypto.go:164] Writing key to /home/jenkins/minikube-integration/20602-1162386/.minikube/profiles/newest-cni-896794/client.key ...
	I0407 13:45:03.857951 1219664 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20602-1162386/.minikube/profiles/newest-cni-896794/client.key: {Name:mk74b4bd050bf2d54e67bb77e1b1cb3da5387f7e Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0407 13:45:03.858026 1219664 certs.go:363] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/20602-1162386/.minikube/profiles/newest-cni-896794/apiserver.key.eeeaaed1
	I0407 13:45:03.858037 1219664 crypto.go:68] Generating cert /home/jenkins/minikube-integration/20602-1162386/.minikube/profiles/newest-cni-896794/apiserver.crt.eeeaaed1 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.61.209]
	I0407 13:45:03.919481 1219664 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/20602-1162386/.minikube/profiles/newest-cni-896794/apiserver.crt.eeeaaed1 ...
	I0407 13:45:03.919530 1219664 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20602-1162386/.minikube/profiles/newest-cni-896794/apiserver.crt.eeeaaed1: {Name:mk4725810a361bea9b03a8c751b2cb744c670293 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0407 13:45:03.919846 1219664 crypto.go:164] Writing key to /home/jenkins/minikube-integration/20602-1162386/.minikube/profiles/newest-cni-896794/apiserver.key.eeeaaed1 ...
	I0407 13:45:03.919876 1219664 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20602-1162386/.minikube/profiles/newest-cni-896794/apiserver.key.eeeaaed1: {Name:mk160529eb7a49aadcfe45d18279c91d67383af5 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0407 13:45:03.919989 1219664 certs.go:381] copying /home/jenkins/minikube-integration/20602-1162386/.minikube/profiles/newest-cni-896794/apiserver.crt.eeeaaed1 -> /home/jenkins/minikube-integration/20602-1162386/.minikube/profiles/newest-cni-896794/apiserver.crt
	I0407 13:45:03.920083 1219664 certs.go:385] copying /home/jenkins/minikube-integration/20602-1162386/.minikube/profiles/newest-cni-896794/apiserver.key.eeeaaed1 -> /home/jenkins/minikube-integration/20602-1162386/.minikube/profiles/newest-cni-896794/apiserver.key
	I0407 13:45:03.920153 1219664 certs.go:363] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/20602-1162386/.minikube/profiles/newest-cni-896794/proxy-client.key
	I0407 13:45:03.920172 1219664 crypto.go:68] Generating cert /home/jenkins/minikube-integration/20602-1162386/.minikube/profiles/newest-cni-896794/proxy-client.crt with IP's: []
	I0407 13:45:04.163622 1219664 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/20602-1162386/.minikube/profiles/newest-cni-896794/proxy-client.crt ...
	I0407 13:45:04.163658 1219664 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20602-1162386/.minikube/profiles/newest-cni-896794/proxy-client.crt: {Name:mk5510b407c2babe1a70994c3148527d2282397d Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0407 13:45:04.163841 1219664 crypto.go:164] Writing key to /home/jenkins/minikube-integration/20602-1162386/.minikube/profiles/newest-cni-896794/proxy-client.key ...
	I0407 13:45:04.163859 1219664 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20602-1162386/.minikube/profiles/newest-cni-896794/proxy-client.key: {Name:mk0c1b32f2b6fb9b464d2db842013005bcd889df Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0407 13:45:04.164100 1219664 certs.go:484] found cert: /home/jenkins/minikube-integration/20602-1162386/.minikube/certs/1169716.pem (1338 bytes)
	W0407 13:45:04.164155 1219664 certs.go:480] ignoring /home/jenkins/minikube-integration/20602-1162386/.minikube/certs/1169716_empty.pem, impossibly tiny 0 bytes
	I0407 13:45:04.164168 1219664 certs.go:484] found cert: /home/jenkins/minikube-integration/20602-1162386/.minikube/certs/ca-key.pem (1675 bytes)
	I0407 13:45:04.164199 1219664 certs.go:484] found cert: /home/jenkins/minikube-integration/20602-1162386/.minikube/certs/ca.pem (1078 bytes)
	I0407 13:45:04.164224 1219664 certs.go:484] found cert: /home/jenkins/minikube-integration/20602-1162386/.minikube/certs/cert.pem (1123 bytes)
	I0407 13:45:04.164245 1219664 certs.go:484] found cert: /home/jenkins/minikube-integration/20602-1162386/.minikube/certs/key.pem (1675 bytes)
	I0407 13:45:04.164285 1219664 certs.go:484] found cert: /home/jenkins/minikube-integration/20602-1162386/.minikube/files/etc/ssl/certs/11697162.pem (1708 bytes)
	I0407 13:45:04.164926 1219664 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20602-1162386/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0407 13:45:04.194484 1219664 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20602-1162386/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0407 13:45:04.223916 1219664 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20602-1162386/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0407 13:45:04.255492 1219664 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20602-1162386/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0407 13:45:04.284276 1219664 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20602-1162386/.minikube/profiles/newest-cni-896794/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
	I0407 13:45:04.312282 1219664 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20602-1162386/.minikube/profiles/newest-cni-896794/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0407 13:45:04.341581 1219664 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20602-1162386/.minikube/profiles/newest-cni-896794/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0407 13:45:04.371423 1219664 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20602-1162386/.minikube/profiles/newest-cni-896794/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0407 13:45:04.411585 1219664 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20602-1162386/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0407 13:45:04.449388 1219664 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20602-1162386/.minikube/certs/1169716.pem --> /usr/share/ca-certificates/1169716.pem (1338 bytes)
	I0407 13:45:04.489442 1219664 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20602-1162386/.minikube/files/etc/ssl/certs/11697162.pem --> /usr/share/ca-certificates/11697162.pem (1708 bytes)
	I0407 13:45:04.519689 1219664 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0407 13:45:04.540212 1219664 ssh_runner.go:195] Run: openssl version
	I0407 13:45:04.546704 1219664 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/1169716.pem && ln -fs /usr/share/ca-certificates/1169716.pem /etc/ssl/certs/1169716.pem"
	I0407 13:45:04.559456 1219664 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/1169716.pem
	I0407 13:45:04.565578 1219664 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Apr  7 12:22 /usr/share/ca-certificates/1169716.pem
	I0407 13:45:04.565676 1219664 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/1169716.pem
	I0407 13:45:04.572435 1219664 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/1169716.pem /etc/ssl/certs/51391683.0"
	I0407 13:45:04.585751 1219664 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/11697162.pem && ln -fs /usr/share/ca-certificates/11697162.pem /etc/ssl/certs/11697162.pem"
	I0407 13:45:04.600054 1219664 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/11697162.pem
	I0407 13:45:04.607102 1219664 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Apr  7 12:22 /usr/share/ca-certificates/11697162.pem
	I0407 13:45:04.607193 1219664 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/11697162.pem
	I0407 13:45:04.613874 1219664 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/11697162.pem /etc/ssl/certs/3ec20f2e.0"
	I0407 13:45:04.626144 1219664 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0407 13:45:04.638650 1219664 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0407 13:45:04.644805 1219664 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Apr  7 12:14 /usr/share/ca-certificates/minikubeCA.pem
	I0407 13:45:04.644902 1219664 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0407 13:45:04.652219 1219664 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
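The three rounds above follow the standard OpenSSL hashed-symlink layout for a trust store: each CA file is linked into /etc/ssl/certs under its own name and again under its subject-hash name. A minimal sketch of one round, using minikubeCA.pem as the example (the hash b5213941 is the value reported in the log above; paths are the ones the log already uses):

    sudo ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem
    hash=$(openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem)  # b5213941 for this CA
    sudo ln -fs /etc/ssl/certs/minikubeCA.pem "/etc/ssl/certs/${hash}.0"
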
	I0407 13:45:04.666740 1219664 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0407 13:45:04.672655 1219664 certs.go:399] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I0407 13:45:04.672722 1219664 kubeadm.go:392] StartCluster: {Name:newest-cni-896794 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.35.0-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.46-1743675393-20591@sha256:8de8167e280414c9d16e4c3da59bc85bc7c9cc24228af995863fc7bcabfcf727 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.32.2 ClusterName:newest-cni-8967
94 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.61.209 Port:8443 KubernetesVersion:v1.32.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262
144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0407 13:45:04.672811 1219664 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0407 13:45:04.672881 1219664 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0407 13:45:04.723483 1219664 cri.go:89] found id: ""
	I0407 13:45:04.723562 1219664 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0407 13:45:04.737845 1219664 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0407 13:45:04.749908 1219664 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0407 13:45:04.764226 1219664 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0407 13:45:04.764268 1219664 kubeadm.go:157] found existing configuration files:
	
	I0407 13:45:04.764335 1219664 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0407 13:45:04.775570 1219664 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0407 13:45:04.775669 1219664 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0407 13:45:04.791048 1219664 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0407 13:45:04.805990 1219664 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0407 13:45:04.806097 1219664 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0407 13:45:04.824920 1219664 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0407 13:45:04.839905 1219664 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0407 13:45:04.840007 1219664 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0407 13:45:04.855263 1219664 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0407 13:45:04.871360 1219664 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0407 13:45:04.871535 1219664 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0407 13:45:04.885687 1219664 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.32.2:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0407 13:45:05.270029 1219664 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0407 13:45:05.664661 1219904 ssh_runner.go:195] Run: cat /etc/os-release
	I0407 13:45:05.675553 1219904 info.go:137] Remote host: Buildroot 2023.02.9
	I0407 13:45:05.675600 1219904 filesync.go:126] Scanning /home/jenkins/minikube-integration/20602-1162386/.minikube/addons for local assets ...
	I0407 13:45:05.675704 1219904 filesync.go:126] Scanning /home/jenkins/minikube-integration/20602-1162386/.minikube/files for local assets ...
	I0407 13:45:05.675830 1219904 filesync.go:149] local asset: /home/jenkins/minikube-integration/20602-1162386/.minikube/files/etc/ssl/certs/11697162.pem -> 11697162.pem in /etc/ssl/certs
	I0407 13:45:05.675988 1219904 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0407 13:45:05.725079 1219904 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20602-1162386/.minikube/files/etc/ssl/certs/11697162.pem --> /etc/ssl/certs/11697162.pem (1708 bytes)
	I0407 13:45:05.795422 1219904 start.go:296] duration metric: took 242.107239ms for postStartSetup
	I0407 13:45:05.795477 1219904 fix.go:56] duration metric: took 7.780797267s for fixHost
	I0407 13:45:05.795508 1219904 main.go:141] libmachine: (kubernetes-upgrade-973925) Calling .GetSSHHostname
	I0407 13:45:05.799540 1219904 main.go:141] libmachine: (kubernetes-upgrade-973925) DBG | domain kubernetes-upgrade-973925 has defined MAC address 52:54:00:5e:eb:42 in network mk-kubernetes-upgrade-973925
	I0407 13:45:05.800282 1219904 main.go:141] libmachine: (kubernetes-upgrade-973925) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5e:eb:42", ip: ""} in network mk-kubernetes-upgrade-973925: {Iface:virbr2 ExpiryTime:2025-04-07 14:44:16 +0000 UTC Type:0 Mac:52:54:00:5e:eb:42 Iaid: IPaddr:192.168.50.245 Prefix:24 Hostname:kubernetes-upgrade-973925 Clientid:01:52:54:00:5e:eb:42}
	I0407 13:45:05.800318 1219904 main.go:141] libmachine: (kubernetes-upgrade-973925) DBG | domain kubernetes-upgrade-973925 has defined IP address 192.168.50.245 and MAC address 52:54:00:5e:eb:42 in network mk-kubernetes-upgrade-973925
	I0407 13:45:05.800652 1219904 main.go:141] libmachine: (kubernetes-upgrade-973925) Calling .GetSSHPort
	I0407 13:45:05.800876 1219904 main.go:141] libmachine: (kubernetes-upgrade-973925) Calling .GetSSHKeyPath
	I0407 13:45:05.801248 1219904 main.go:141] libmachine: (kubernetes-upgrade-973925) Calling .GetSSHKeyPath
	I0407 13:45:05.801531 1219904 main.go:141] libmachine: (kubernetes-upgrade-973925) Calling .GetSSHUsername
	I0407 13:45:05.801928 1219904 main.go:141] libmachine: Using SSH client type: native
	I0407 13:45:05.802233 1219904 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x836360] 0x839060 <nil>  [] 0s} 192.168.50.245 22 <nil> <nil>}
	I0407 13:45:05.802252 1219904 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0407 13:45:05.932044 1219904 main.go:141] libmachine: SSH cmd err, output: <nil>: 1744033505.927024753
	
	I0407 13:45:05.932072 1219904 fix.go:216] guest clock: 1744033505.927024753
	I0407 13:45:05.932083 1219904 fix.go:229] Guest: 2025-04-07 13:45:05.927024753 +0000 UTC Remote: 2025-04-07 13:45:05.795482698 +0000 UTC m=+20.205653075 (delta=131.542055ms)
	I0407 13:45:05.932149 1219904 fix.go:200] guest clock delta is within tolerance: 131.542055ms
	I0407 13:45:05.932156 1219904 start.go:83] releasing machines lock for "kubernetes-upgrade-973925", held for 7.917519775s
	I0407 13:45:05.932180 1219904 main.go:141] libmachine: (kubernetes-upgrade-973925) Calling .DriverName
	I0407 13:45:05.932512 1219904 main.go:141] libmachine: (kubernetes-upgrade-973925) Calling .GetIP
	I0407 13:45:05.936186 1219904 main.go:141] libmachine: (kubernetes-upgrade-973925) DBG | domain kubernetes-upgrade-973925 has defined MAC address 52:54:00:5e:eb:42 in network mk-kubernetes-upgrade-973925
	I0407 13:45:05.936767 1219904 main.go:141] libmachine: (kubernetes-upgrade-973925) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5e:eb:42", ip: ""} in network mk-kubernetes-upgrade-973925: {Iface:virbr2 ExpiryTime:2025-04-07 14:44:16 +0000 UTC Type:0 Mac:52:54:00:5e:eb:42 Iaid: IPaddr:192.168.50.245 Prefix:24 Hostname:kubernetes-upgrade-973925 Clientid:01:52:54:00:5e:eb:42}
	I0407 13:45:05.936819 1219904 main.go:141] libmachine: (kubernetes-upgrade-973925) DBG | domain kubernetes-upgrade-973925 has defined IP address 192.168.50.245 and MAC address 52:54:00:5e:eb:42 in network mk-kubernetes-upgrade-973925
	I0407 13:45:05.937137 1219904 main.go:141] libmachine: (kubernetes-upgrade-973925) Calling .DriverName
	I0407 13:45:05.938158 1219904 main.go:141] libmachine: (kubernetes-upgrade-973925) Calling .DriverName
	I0407 13:45:05.938649 1219904 main.go:141] libmachine: (kubernetes-upgrade-973925) Calling .DriverName
	I0407 13:45:05.938761 1219904 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0407 13:45:05.938821 1219904 main.go:141] libmachine: (kubernetes-upgrade-973925) Calling .GetSSHHostname
	I0407 13:45:05.939052 1219904 ssh_runner.go:195] Run: cat /version.json
	I0407 13:45:05.939150 1219904 main.go:141] libmachine: (kubernetes-upgrade-973925) Calling .GetSSHHostname
	I0407 13:45:05.942826 1219904 main.go:141] libmachine: (kubernetes-upgrade-973925) DBG | domain kubernetes-upgrade-973925 has defined MAC address 52:54:00:5e:eb:42 in network mk-kubernetes-upgrade-973925
	I0407 13:45:05.942884 1219904 main.go:141] libmachine: (kubernetes-upgrade-973925) DBG | domain kubernetes-upgrade-973925 has defined MAC address 52:54:00:5e:eb:42 in network mk-kubernetes-upgrade-973925
	I0407 13:45:05.943562 1219904 main.go:141] libmachine: (kubernetes-upgrade-973925) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5e:eb:42", ip: ""} in network mk-kubernetes-upgrade-973925: {Iface:virbr2 ExpiryTime:2025-04-07 14:44:16 +0000 UTC Type:0 Mac:52:54:00:5e:eb:42 Iaid: IPaddr:192.168.50.245 Prefix:24 Hostname:kubernetes-upgrade-973925 Clientid:01:52:54:00:5e:eb:42}
	I0407 13:45:05.943615 1219904 main.go:141] libmachine: (kubernetes-upgrade-973925) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5e:eb:42", ip: ""} in network mk-kubernetes-upgrade-973925: {Iface:virbr2 ExpiryTime:2025-04-07 14:44:16 +0000 UTC Type:0 Mac:52:54:00:5e:eb:42 Iaid: IPaddr:192.168.50.245 Prefix:24 Hostname:kubernetes-upgrade-973925 Clientid:01:52:54:00:5e:eb:42}
	I0407 13:45:05.943641 1219904 main.go:141] libmachine: (kubernetes-upgrade-973925) DBG | domain kubernetes-upgrade-973925 has defined IP address 192.168.50.245 and MAC address 52:54:00:5e:eb:42 in network mk-kubernetes-upgrade-973925
	I0407 13:45:05.943662 1219904 main.go:141] libmachine: (kubernetes-upgrade-973925) DBG | domain kubernetes-upgrade-973925 has defined IP address 192.168.50.245 and MAC address 52:54:00:5e:eb:42 in network mk-kubernetes-upgrade-973925
	I0407 13:45:05.943840 1219904 main.go:141] libmachine: (kubernetes-upgrade-973925) Calling .GetSSHPort
	I0407 13:45:05.944237 1219904 main.go:141] libmachine: (kubernetes-upgrade-973925) Calling .GetSSHPort
	I0407 13:45:05.944242 1219904 main.go:141] libmachine: (kubernetes-upgrade-973925) Calling .GetSSHKeyPath
	I0407 13:45:05.944460 1219904 main.go:141] libmachine: (kubernetes-upgrade-973925) Calling .GetSSHUsername
	I0407 13:45:05.944546 1219904 main.go:141] libmachine: (kubernetes-upgrade-973925) Calling .GetSSHKeyPath
	I0407 13:45:05.944650 1219904 sshutil.go:53] new ssh client: &{IP:192.168.50.245 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20602-1162386/.minikube/machines/kubernetes-upgrade-973925/id_rsa Username:docker}
	I0407 13:45:05.944739 1219904 main.go:141] libmachine: (kubernetes-upgrade-973925) Calling .GetSSHUsername
	I0407 13:45:05.944847 1219904 sshutil.go:53] new ssh client: &{IP:192.168.50.245 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20602-1162386/.minikube/machines/kubernetes-upgrade-973925/id_rsa Username:docker}
	I0407 13:45:06.054248 1219904 ssh_runner.go:195] Run: systemctl --version
	I0407 13:45:06.060879 1219904 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0407 13:45:06.236402 1219904 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0407 13:45:06.245164 1219904 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0407 13:45:06.245259 1219904 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0407 13:45:06.255757 1219904 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I0407 13:45:06.255796 1219904 start.go:495] detecting cgroup driver to use...
	I0407 13:45:06.255870 1219904 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0407 13:45:06.276311 1219904 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0407 13:45:06.294137 1219904 docker.go:217] disabling cri-docker service (if available) ...
	I0407 13:45:06.294209 1219904 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0407 13:45:06.312801 1219904 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0407 13:45:06.329530 1219904 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0407 13:45:06.510801 1219904 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0407 13:45:06.731375 1219904 docker.go:233] disabling docker service ...
	I0407 13:45:06.731504 1219904 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0407 13:45:06.920099 1219904 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0407 13:45:07.020193 1219904 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0407 13:45:07.253095 1219904 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0407 13:45:07.570700 1219904 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0407 13:45:07.597458 1219904 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0407 13:45:07.689727 1219904 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10" pause image...
	I0407 13:45:07.689815 1219904 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0407 13:45:07.721598 1219904 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0407 13:45:07.721697 1219904 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0407 13:45:07.745826 1219904 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0407 13:45:07.817193 1219904 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0407 13:45:07.855822 1219904 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0407 13:45:07.869187 1219904 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0407 13:45:07.881478 1219904 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I0407 13:45:07.895750 1219904 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I0407 13:45:07.915634 1219904 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0407 13:45:07.935867 1219904 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0407 13:45:07.956638 1219904 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0407 13:45:08.283042 1219904 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0407 13:45:09.028060 1219904 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I0407 13:45:09.028147 1219904 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0407 13:45:09.033131 1219904 start.go:563] Will wait 60s for crictl version
	I0407 13:45:09.033204 1219904 ssh_runner.go:195] Run: which crictl
	I0407 13:45:09.037238 1219904 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0407 13:45:09.081861 1219904 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0407 13:45:09.081945 1219904 ssh_runner.go:195] Run: crio --version
	I0407 13:45:09.114782 1219904 ssh_runner.go:195] Run: crio --version
	I0407 13:45:09.156316 1219904 out.go:177] * Preparing Kubernetes v1.32.2 on CRI-O 1.29.1 ...
	I0407 13:45:09.158024 1219904 main.go:141] libmachine: (kubernetes-upgrade-973925) Calling .GetIP
	I0407 13:45:09.161854 1219904 main.go:141] libmachine: (kubernetes-upgrade-973925) DBG | domain kubernetes-upgrade-973925 has defined MAC address 52:54:00:5e:eb:42 in network mk-kubernetes-upgrade-973925
	I0407 13:45:09.162326 1219904 main.go:141] libmachine: (kubernetes-upgrade-973925) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5e:eb:42", ip: ""} in network mk-kubernetes-upgrade-973925: {Iface:virbr2 ExpiryTime:2025-04-07 14:44:16 +0000 UTC Type:0 Mac:52:54:00:5e:eb:42 Iaid: IPaddr:192.168.50.245 Prefix:24 Hostname:kubernetes-upgrade-973925 Clientid:01:52:54:00:5e:eb:42}
	I0407 13:45:09.162364 1219904 main.go:141] libmachine: (kubernetes-upgrade-973925) DBG | domain kubernetes-upgrade-973925 has defined IP address 192.168.50.245 and MAC address 52:54:00:5e:eb:42 in network mk-kubernetes-upgrade-973925
	I0407 13:45:09.162559 1219904 ssh_runner.go:195] Run: grep 192.168.50.1	host.minikube.internal$ /etc/hosts
	I0407 13:45:09.167199 1219904 kubeadm.go:883] updating cluster {Name:kubernetes-upgrade-973925 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.35.0-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.46-1743675393-20591@sha256:8de8167e280414c9d16e4c3da59bc85bc7c9cc24228af995863fc7bcabfcf727 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.32.2 ClusterName:kube
rnetes-upgrade-973925 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.50.245 Port:8443 KubernetesVersion:v1.32.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryM
irror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0407 13:45:09.167356 1219904 preload.go:131] Checking if preload exists for k8s version v1.32.2 and runtime crio
	I0407 13:45:09.167425 1219904 ssh_runner.go:195] Run: sudo crictl images --output json
	I0407 13:45:09.217965 1219904 crio.go:514] all images are preloaded for cri-o runtime.
	I0407 13:45:09.217991 1219904 crio.go:433] Images already preloaded, skipping extraction
	I0407 13:45:09.218051 1219904 ssh_runner.go:195] Run: sudo crictl images --output json
	I0407 13:45:09.263124 1219904 crio.go:514] all images are preloaded for cri-o runtime.
	I0407 13:45:09.263150 1219904 cache_images.go:84] Images are preloaded, skipping loading
	I0407 13:45:09.263158 1219904 kubeadm.go:934] updating node { 192.168.50.245 8443 v1.32.2 crio true true} ...
	I0407 13:45:09.263264 1219904 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.32.2/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=kubernetes-upgrade-973925 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.50.245
	
	[Install]
	 config:
	{KubernetesVersion:v1.32.2 ClusterName:kubernetes-upgrade-973925 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0407 13:45:09.263361 1219904 ssh_runner.go:195] Run: crio config
	I0407 13:45:09.324045 1219904 cni.go:84] Creating CNI manager for ""
	I0407 13:45:09.324079 1219904 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0407 13:45:09.324094 1219904 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0407 13:45:09.324124 1219904 kubeadm.go:189] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.50.245 APIServerPort:8443 KubernetesVersion:v1.32.2 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:kubernetes-upgrade-973925 NodeName:kubernetes-upgrade-973925 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.50.245"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.50.245 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.
crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0407 13:45:09.324257 1219904 kubeadm.go:195] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.50.245
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "kubernetes-upgrade-973925"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.50.245"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.50.245"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      - name: "proxy-refresh-interval"
	        value: "70000"
	kubernetesVersion: v1.32.2
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0407 13:45:09.324388 1219904 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.32.2
	I0407 13:45:09.338145 1219904 binaries.go:44] Found k8s binaries, skipping transfer
	I0407 13:45:09.338235 1219904 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0407 13:45:09.351248 1219904 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (325 bytes)
	I0407 13:45:09.372741 1219904 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0407 13:45:09.394098 1219904 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2305 bytes)
	I0407 13:45:09.417697 1219904 ssh_runner.go:195] Run: grep 192.168.50.245	control-plane.minikube.internal$ /etc/hosts
	I0407 13:45:09.422129 1219904 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0407 13:45:09.569716 1219904 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0407 13:45:09.585292 1219904 certs.go:68] Setting up /home/jenkins/minikube-integration/20602-1162386/.minikube/profiles/kubernetes-upgrade-973925 for IP: 192.168.50.245
	I0407 13:45:09.585326 1219904 certs.go:194] generating shared ca certs ...
	I0407 13:45:09.585349 1219904 certs.go:226] acquiring lock for ca certs: {Name:mk8e89191fca7f2111bdd08c345368f593b0d5d0 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0407 13:45:09.585547 1219904 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/20602-1162386/.minikube/ca.key
	I0407 13:45:09.585613 1219904 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/20602-1162386/.minikube/proxy-client-ca.key
	I0407 13:45:09.585627 1219904 certs.go:256] generating profile certs ...
	I0407 13:45:09.585747 1219904 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/20602-1162386/.minikube/profiles/kubernetes-upgrade-973925/client.key
	I0407 13:45:09.585802 1219904 certs.go:359] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/20602-1162386/.minikube/profiles/kubernetes-upgrade-973925/apiserver.key.dcf03a85
	I0407 13:45:09.585837 1219904 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/20602-1162386/.minikube/profiles/kubernetes-upgrade-973925/proxy-client.key
	I0407 13:45:09.585954 1219904 certs.go:484] found cert: /home/jenkins/minikube-integration/20602-1162386/.minikube/certs/1169716.pem (1338 bytes)
	W0407 13:45:09.585990 1219904 certs.go:480] ignoring /home/jenkins/minikube-integration/20602-1162386/.minikube/certs/1169716_empty.pem, impossibly tiny 0 bytes
	I0407 13:45:09.586003 1219904 certs.go:484] found cert: /home/jenkins/minikube-integration/20602-1162386/.minikube/certs/ca-key.pem (1675 bytes)
	I0407 13:45:09.586023 1219904 certs.go:484] found cert: /home/jenkins/minikube-integration/20602-1162386/.minikube/certs/ca.pem (1078 bytes)
	I0407 13:45:09.586045 1219904 certs.go:484] found cert: /home/jenkins/minikube-integration/20602-1162386/.minikube/certs/cert.pem (1123 bytes)
	I0407 13:45:09.586067 1219904 certs.go:484] found cert: /home/jenkins/minikube-integration/20602-1162386/.minikube/certs/key.pem (1675 bytes)
	I0407 13:45:09.586109 1219904 certs.go:484] found cert: /home/jenkins/minikube-integration/20602-1162386/.minikube/files/etc/ssl/certs/11697162.pem (1708 bytes)
	I0407 13:45:09.586689 1219904 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20602-1162386/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0407 13:45:09.612305 1219904 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20602-1162386/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0407 13:45:09.638808 1219904 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20602-1162386/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0407 13:45:09.664389 1219904 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20602-1162386/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0407 13:45:09.690393 1219904 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20602-1162386/.minikube/profiles/kubernetes-upgrade-973925/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1436 bytes)
	I0407 13:45:09.716816 1219904 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20602-1162386/.minikube/profiles/kubernetes-upgrade-973925/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0407 13:45:09.744278 1219904 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20602-1162386/.minikube/profiles/kubernetes-upgrade-973925/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0407 13:45:09.770205 1219904 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20602-1162386/.minikube/profiles/kubernetes-upgrade-973925/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0407 13:45:09.796613 1219904 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20602-1162386/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0407 13:45:09.822757 1219904 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20602-1162386/.minikube/certs/1169716.pem --> /usr/share/ca-certificates/1169716.pem (1338 bytes)
	I0407 13:45:09.894220 1219904 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20602-1162386/.minikube/files/etc/ssl/certs/11697162.pem --> /usr/share/ca-certificates/11697162.pem (1708 bytes)
	I0407 13:45:09.948308 1219904 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0407 13:45:10.064743 1219904 ssh_runner.go:195] Run: openssl version
	I0407 13:45:10.089133 1219904 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0407 13:45:10.176206 1219904 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0407 13:45:10.214825 1219904 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Apr  7 12:14 /usr/share/ca-certificates/minikubeCA.pem
	I0407 13:45:10.214921 1219904 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0407 13:45:10.306498 1219904 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0407 13:45:10.354037 1219904 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/1169716.pem && ln -fs /usr/share/ca-certificates/1169716.pem /etc/ssl/certs/1169716.pem"
	I0407 13:45:10.411301 1219904 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/1169716.pem
	I0407 13:45:10.427294 1219904 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Apr  7 12:22 /usr/share/ca-certificates/1169716.pem
	I0407 13:45:10.427372 1219904 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/1169716.pem
	I0407 13:45:10.434009 1219904 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/1169716.pem /etc/ssl/certs/51391683.0"
	I0407 13:45:10.484048 1219904 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/11697162.pem && ln -fs /usr/share/ca-certificates/11697162.pem /etc/ssl/certs/11697162.pem"
	I0407 13:45:10.510102 1219904 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/11697162.pem
	I0407 13:45:10.524703 1219904 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Apr  7 12:22 /usr/share/ca-certificates/11697162.pem
	I0407 13:45:10.524785 1219904 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/11697162.pem
	I0407 13:45:10.542185 1219904 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/11697162.pem /etc/ssl/certs/3ec20f2e.0"
	I0407 13:45:10.570135 1219904 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0407 13:45:10.582106 1219904 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0407 13:45:10.594151 1219904 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0407 13:45:10.603175 1219904 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0407 13:45:10.619931 1219904 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0407 13:45:10.638463 1219904 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0407 13:45:15.454149 1219664 kubeadm.go:310] [init] Using Kubernetes version: v1.32.2
	I0407 13:45:15.454227 1219664 kubeadm.go:310] [preflight] Running pre-flight checks
	I0407 13:45:15.454313 1219664 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0407 13:45:15.454427 1219664 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0407 13:45:15.454545 1219664 kubeadm.go:310] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I0407 13:45:15.454630 1219664 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0407 13:45:15.456348 1219664 out.go:235]   - Generating certificates and keys ...
	I0407 13:45:15.456450 1219664 kubeadm.go:310] [certs] Using existing ca certificate authority
	I0407 13:45:15.456528 1219664 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I0407 13:45:15.456647 1219664 kubeadm.go:310] [certs] Generating "apiserver-kubelet-client" certificate and key
	I0407 13:45:15.456789 1219664 kubeadm.go:310] [certs] Generating "front-proxy-ca" certificate and key
	I0407 13:45:15.456892 1219664 kubeadm.go:310] [certs] Generating "front-proxy-client" certificate and key
	I0407 13:45:15.456964 1219664 kubeadm.go:310] [certs] Generating "etcd/ca" certificate and key
	I0407 13:45:15.457044 1219664 kubeadm.go:310] [certs] Generating "etcd/server" certificate and key
	I0407 13:45:15.457250 1219664 kubeadm.go:310] [certs] etcd/server serving cert is signed for DNS names [localhost newest-cni-896794] and IPs [192.168.61.209 127.0.0.1 ::1]
	I0407 13:45:15.457345 1219664 kubeadm.go:310] [certs] Generating "etcd/peer" certificate and key
	I0407 13:45:15.457518 1219664 kubeadm.go:310] [certs] etcd/peer serving cert is signed for DNS names [localhost newest-cni-896794] and IPs [192.168.61.209 127.0.0.1 ::1]
	I0407 13:45:15.457623 1219664 kubeadm.go:310] [certs] Generating "etcd/healthcheck-client" certificate and key
	I0407 13:45:15.457737 1219664 kubeadm.go:310] [certs] Generating "apiserver-etcd-client" certificate and key
	I0407 13:45:15.457812 1219664 kubeadm.go:310] [certs] Generating "sa" key and public key
	I0407 13:45:15.457886 1219664 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0407 13:45:15.457972 1219664 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0407 13:45:15.458046 1219664 kubeadm.go:310] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I0407 13:45:15.458142 1219664 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0407 13:45:15.458263 1219664 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0407 13:45:15.458355 1219664 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0407 13:45:15.458475 1219664 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0407 13:45:15.458573 1219664 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0407 13:45:15.460709 1219664 out.go:235]   - Booting up control plane ...
	I0407 13:45:15.460855 1219664 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0407 13:45:15.460972 1219664 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0407 13:45:15.461072 1219664 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0407 13:45:15.461222 1219664 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0407 13:45:15.461358 1219664 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0407 13:45:15.461419 1219664 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I0407 13:45:15.461575 1219664 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I0407 13:45:15.461689 1219664 kubeadm.go:310] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I0407 13:45:15.461795 1219664 kubeadm.go:310] [kubelet-check] The kubelet is healthy after 502.117648ms
	I0407 13:45:15.461864 1219664 kubeadm.go:310] [api-check] Waiting for a healthy API server. This can take up to 4m0s
	I0407 13:45:15.461914 1219664 kubeadm.go:310] [api-check] The API server is healthy after 5.501519105s
	I0407 13:45:15.462047 1219664 kubeadm.go:310] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0407 13:45:15.462203 1219664 kubeadm.go:310] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I0407 13:45:15.462271 1219664 kubeadm.go:310] [upload-certs] Skipping phase. Please see --upload-certs
	I0407 13:45:15.462540 1219664 kubeadm.go:310] [mark-control-plane] Marking the node newest-cni-896794 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I0407 13:45:15.462623 1219664 kubeadm.go:310] [bootstrap-token] Using token: wbeib1.oiz0q06qdsvsjw8u
	I0407 13:45:15.465058 1219664 out.go:235]   - Configuring RBAC rules ...
	I0407 13:45:15.465221 1219664 kubeadm.go:310] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0407 13:45:15.465344 1219664 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I0407 13:45:15.465530 1219664 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0407 13:45:15.465671 1219664 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I0407 13:45:15.465813 1219664 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0407 13:45:15.465936 1219664 kubeadm.go:310] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0407 13:45:15.466119 1219664 kubeadm.go:310] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I0407 13:45:15.466201 1219664 kubeadm.go:310] [addons] Applied essential addon: CoreDNS
	I0407 13:45:15.466273 1219664 kubeadm.go:310] [addons] Applied essential addon: kube-proxy
	I0407 13:45:15.466282 1219664 kubeadm.go:310] 
	I0407 13:45:15.466356 1219664 kubeadm.go:310] Your Kubernetes control-plane has initialized successfully!
	I0407 13:45:15.466366 1219664 kubeadm.go:310] 
	I0407 13:45:15.466478 1219664 kubeadm.go:310] To start using your cluster, you need to run the following as a regular user:
	I0407 13:45:15.466490 1219664 kubeadm.go:310] 
	I0407 13:45:15.466528 1219664 kubeadm.go:310]   mkdir -p $HOME/.kube
	I0407 13:45:15.466607 1219664 kubeadm.go:310]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0407 13:45:15.466684 1219664 kubeadm.go:310]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0407 13:45:15.466698 1219664 kubeadm.go:310] 
	I0407 13:45:15.466781 1219664 kubeadm.go:310] Alternatively, if you are the root user, you can run:
	I0407 13:45:15.466791 1219664 kubeadm.go:310] 
	I0407 13:45:15.466852 1219664 kubeadm.go:310]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I0407 13:45:15.466861 1219664 kubeadm.go:310] 
	I0407 13:45:15.466941 1219664 kubeadm.go:310] You should now deploy a pod network to the cluster.
	I0407 13:45:15.467046 1219664 kubeadm.go:310] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0407 13:45:15.467153 1219664 kubeadm.go:310]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0407 13:45:15.467175 1219664 kubeadm.go:310] 
	I0407 13:45:15.467308 1219664 kubeadm.go:310] You can now join any number of control-plane nodes by copying certificate authorities
	I0407 13:45:15.467420 1219664 kubeadm.go:310] and service account keys on each node and then running the following as root:
	I0407 13:45:15.467430 1219664 kubeadm.go:310] 
	I0407 13:45:15.467542 1219664 kubeadm.go:310]   kubeadm join control-plane.minikube.internal:8443 --token wbeib1.oiz0q06qdsvsjw8u \
	I0407 13:45:15.467683 1219664 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:001387253bb6e222db2af12e9fcbe5a1c3ee2a6f53970e58b5a7d017a3fc6618 \
	I0407 13:45:15.467719 1219664 kubeadm.go:310] 	--control-plane 
	I0407 13:45:15.467728 1219664 kubeadm.go:310] 
	I0407 13:45:15.467855 1219664 kubeadm.go:310] Then you can join any number of worker nodes by running the following on each as root:
	I0407 13:45:15.467873 1219664 kubeadm.go:310] 
	I0407 13:45:15.467956 1219664 kubeadm.go:310] kubeadm join control-plane.minikube.internal:8443 --token wbeib1.oiz0q06qdsvsjw8u \
	I0407 13:45:15.468067 1219664 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:001387253bb6e222db2af12e9fcbe5a1c3ee2a6f53970e58b5a7d017a3fc6618 
	I0407 13:45:15.468083 1219664 cni.go:84] Creating CNI manager for ""
	I0407 13:45:15.468091 1219664 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0407 13:45:15.469939 1219664 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0407 13:45:10.659932 1219904 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
	I0407 13:45:10.668636 1219904 kubeadm.go:392] StartCluster: {Name:kubernetes-upgrade-973925 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.35.0-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.46-1743675393-20591@sha256:8de8167e280414c9d16e4c3da59bc85bc7c9cc24228af995863fc7bcabfcf727 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.32.2 ClusterName:kuberne
tes-upgrade-973925 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.50.245 Port:8443 KubernetesVersion:v1.32.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirr
or: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0407 13:45:10.668773 1219904 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0407 13:45:10.668841 1219904 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0407 13:45:10.733121 1219904 cri.go:89] found id: "d9b2f2808472a9705905ffa3772266f839df4f34824848ecc63cada436d6e514"
	I0407 13:45:10.733162 1219904 cri.go:89] found id: "b1d9cee1f4d86988c7a6faed8dfbf0446675f294dc84dd7cc188679559cd45e7"
	I0407 13:45:10.733170 1219904 cri.go:89] found id: "cd109206ca2e5058d7e1226fbd60f240bbd4b1fd61a65eb31560ec84f577d016"
	I0407 13:45:10.733175 1219904 cri.go:89] found id: "37b7ab13b939fb19e33b772e906c1629d4d441b1138838e8b3f905b91f1196a9"
	I0407 13:45:10.733179 1219904 cri.go:89] found id: "bec94b6db7651baebda958369f7106eb9fdd96c0eb6e1ea357a930ed520e2f95"
	I0407 13:45:10.733184 1219904 cri.go:89] found id: "aef63fefcc4cd4b936d9867ede2dbe10479da3cc4f3c1a8dd36ee1f22e8c8bd4"
	I0407 13:45:10.733189 1219904 cri.go:89] found id: "751f0d4bfaa5dc35c87a871a52a8011a3e1e72ca49b54dfdda346e3dbea61b7c"
	I0407 13:45:10.733193 1219904 cri.go:89] found id: "a709f83a5b89731b0a7a06b424bf955afd774bf9f4116a5ebf27f470a05f33fa"
	I0407 13:45:10.733197 1219904 cri.go:89] found id: "86d01bbb21376b91599f5a5dc0d27577bbc6994672439f0d0d2269a0fe76db51"
	I0407 13:45:10.733207 1219904 cri.go:89] found id: "57060c8ef1e64381d5ded7c05f74e15dd0d19f9a4f7e151d1466d4feb5b8bba3"
	I0407 13:45:10.733210 1219904 cri.go:89] found id: "4a76d9d1c48964bc4eb2a0d46141ef5b5044153d9d172e3dccdf99026cfafc0f"
	I0407 13:45:10.733212 1219904 cri.go:89] found id: ""
	I0407 13:45:10.733267 1219904 ssh_runner.go:195] Run: sudo runc list -f json

                                                
                                                
-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p kubernetes-upgrade-973925 -n kubernetes-upgrade-973925
helpers_test.go:261: (dbg) Run:  kubectl --context kubernetes-upgrade-973925 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:285: <<< TestKubernetesUpgrade FAILED: end of post-mortem logs <<<
helpers_test.go:286: ---------------------/post-mortem---------------------------------
helpers_test.go:175: Cleaning up "kubernetes-upgrade-973925" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p kubernetes-upgrade-973925
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p kubernetes-upgrade-973925: (1.312390939s)
--- FAIL: TestKubernetesUpgrade (368.95s)
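
For reference, the run above reconfigures CRI-O on the node before retrying kubeadm (crictl endpoint, pause image, cgroup driver, then a crio restart). A minimal sketch of those same steps, runnable by hand over SSH on the minikube VM and assuming the same /etc/crio/crio.conf.d/02-crio.conf drop-in shown in the log; this is an editor's illustration of the captured commands, not part of the test output:

	# point crictl at the CRI-O socket
	printf 'runtime-endpoint: unix:///var/run/crio/crio.sock\n' | sudo tee /etc/crictl.yaml
	# use the registry.k8s.io/pause:3.10 pause image and the cgroupfs cgroup manager
	sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10"|' /etc/crio/crio.conf.d/02-crio.conf
	sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf
	# enable IP forwarding, reload units, restart CRI-O and verify
	sudo sh -c 'echo 1 > /proc/sys/net/ipv4/ip_forward'
	sudo systemctl daemon-reload && sudo systemctl restart crio
	sudo crictl version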

                                                
                                    
x
+
TestStartStop/group/old-k8s-version/serial/FirstStart (274.18s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/FirstStart
start_stop_delete_test.go:184: (dbg) Run:  out/minikube-linux-amd64 start -p old-k8s-version-435730 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.20.0
start_stop_delete_test.go:184: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p old-k8s-version-435730 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.20.0: exit status 109 (4m33.844296318s)

                                                
                                                
-- stdout --
	* [old-k8s-version-435730] minikube v1.35.0 on Ubuntu 20.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=20602
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/20602-1162386/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/20602-1162386/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the kvm2 driver based on user configuration
	* Starting "old-k8s-version-435730" primary control-plane node in "old-k8s-version-435730" cluster
	* Creating kvm2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	* Preparing Kubernetes v1.20.0 on CRI-O 1.29.1 ...
	  - Generating certificates and keys ...
	  - Booting up control plane ...
	  - Generating certificates and keys ...
	  - Booting up control plane ...
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0407 13:29:03.000240 1206720 out.go:345] Setting OutFile to fd 1 ...
	I0407 13:29:03.000693 1206720 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0407 13:29:03.000709 1206720 out.go:358] Setting ErrFile to fd 2...
	I0407 13:29:03.000717 1206720 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0407 13:29:03.001079 1206720 root.go:338] Updating PATH: /home/jenkins/minikube-integration/20602-1162386/.minikube/bin
	I0407 13:29:03.001995 1206720 out.go:352] Setting JSON to false
	I0407 13:29:03.004004 1206720 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-2","uptime":18687,"bootTime":1744013856,"procs":179,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1078-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0407 13:29:03.004180 1206720 start.go:139] virtualization: kvm guest
	I0407 13:29:03.006852 1206720 out.go:177] * [old-k8s-version-435730] minikube v1.35.0 on Ubuntu 20.04 (kvm/amd64)
	I0407 13:29:03.008561 1206720 notify.go:220] Checking for updates...
	I0407 13:29:03.008576 1206720 out.go:177]   - MINIKUBE_LOCATION=20602
	I0407 13:29:03.011785 1206720 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0407 13:29:03.013459 1206720 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/20602-1162386/kubeconfig
	I0407 13:29:03.017109 1206720 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/20602-1162386/.minikube
	I0407 13:29:03.020936 1206720 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0407 13:29:03.023226 1206720 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0407 13:29:03.026590 1206720 driver.go:394] Setting default libvirt URI to qemu:///system
	I0407 13:29:03.081788 1206720 out.go:177] * Using the kvm2 driver based on user configuration
	I0407 13:29:03.083403 1206720 start.go:297] selected driver: kvm2
	I0407 13:29:03.083433 1206720 start.go:901] validating driver "kvm2" against <nil>
	I0407 13:29:03.083446 1206720 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0407 13:29:03.084844 1206720 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0407 13:29:03.085009 1206720 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/20602-1162386/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0407 13:29:03.106707 1206720 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.35.0
	I0407 13:29:03.106829 1206720 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0407 13:29:03.107162 1206720 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0407 13:29:03.107206 1206720 cni.go:84] Creating CNI manager for ""
	I0407 13:29:03.107246 1206720 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0407 13:29:03.107264 1206720 start_flags.go:319] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I0407 13:29:03.107329 1206720 start.go:340] cluster config:
	{Name:old-k8s-version-435730 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.46-1743675393-20591@sha256:8de8167e280414c9d16e4c3da59bc85bc7c9cc24228af995863fc7bcabfcf727 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-435730 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local C
ontainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: S
SHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0407 13:29:03.107474 1206720 iso.go:125] acquiring lock: {Name:mk51e1827709f7a3810dbd898083f8185ece65eb Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0407 13:29:03.110022 1206720 out.go:177] * Starting "old-k8s-version-435730" primary control-plane node in "old-k8s-version-435730" cluster
	I0407 13:29:03.112339 1206720 preload.go:131] Checking if preload exists for k8s version v1.20.0 and runtime crio
	I0407 13:29:03.112419 1206720 preload.go:146] Found local preload: /home/jenkins/minikube-integration/20602-1162386/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4
	I0407 13:29:03.112459 1206720 cache.go:56] Caching tarball of preloaded images
	I0407 13:29:03.112614 1206720 preload.go:172] Found /home/jenkins/minikube-integration/20602-1162386/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I0407 13:29:03.112631 1206720 cache.go:59] Finished verifying existence of preloaded tar for v1.20.0 on crio
	I0407 13:29:03.112960 1206720 profile.go:143] Saving config to /home/jenkins/minikube-integration/20602-1162386/.minikube/profiles/old-k8s-version-435730/config.json ...
	I0407 13:29:03.112997 1206720 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20602-1162386/.minikube/profiles/old-k8s-version-435730/config.json: {Name:mk8f1b272eb2efc3b9c7974cff46ed8da7a4eaaa Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0407 13:29:03.113187 1206720 start.go:360] acquireMachinesLock for old-k8s-version-435730: {Name:mk51d4c744fa92d56cf6ac11b1e792c85ef6709a Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0407 13:29:03.113260 1206720 start.go:364] duration metric: took 49.415µs to acquireMachinesLock for "old-k8s-version-435730"
	I0407 13:29:03.113289 1206720 start.go:93] Provisioning new machine with config: &{Name:old-k8s-version-435730 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.35.0-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.46-1743675393-20591@sha256:8de8167e280414c9d16e4c3da59bc85bc7c9cc24228af995863fc7bcabfcf727 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 C
lusterName:old-k8s-version-435730 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p M
ountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0407 13:29:03.113374 1206720 start.go:125] createHost starting for "" (driver="kvm2")
	I0407 13:29:03.115405 1206720 out.go:235] * Creating kvm2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I0407 13:29:03.115701 1206720 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0407 13:29:03.115772 1206720 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0407 13:29:03.133400 1206720 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34605
	I0407 13:29:03.134025 1206720 main.go:141] libmachine: () Calling .GetVersion
	I0407 13:29:03.134601 1206720 main.go:141] libmachine: Using API Version  1
	I0407 13:29:03.134626 1206720 main.go:141] libmachine: () Calling .SetConfigRaw
	I0407 13:29:03.135028 1206720 main.go:141] libmachine: () Calling .GetMachineName
	I0407 13:29:03.135225 1206720 main.go:141] libmachine: (old-k8s-version-435730) Calling .GetMachineName
	I0407 13:29:03.135412 1206720 main.go:141] libmachine: (old-k8s-version-435730) Calling .DriverName
	I0407 13:29:03.135562 1206720 start.go:159] libmachine.API.Create for "old-k8s-version-435730" (driver="kvm2")
	I0407 13:29:03.135592 1206720 client.go:168] LocalClient.Create starting
	I0407 13:29:03.135621 1206720 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/20602-1162386/.minikube/certs/ca.pem
	I0407 13:29:03.135657 1206720 main.go:141] libmachine: Decoding PEM data...
	I0407 13:29:03.135671 1206720 main.go:141] libmachine: Parsing certificate...
	I0407 13:29:03.135734 1206720 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/20602-1162386/.minikube/certs/cert.pem
	I0407 13:29:03.135755 1206720 main.go:141] libmachine: Decoding PEM data...
	I0407 13:29:03.135765 1206720 main.go:141] libmachine: Parsing certificate...
	I0407 13:29:03.135782 1206720 main.go:141] libmachine: Running pre-create checks...
	I0407 13:29:03.135792 1206720 main.go:141] libmachine: (old-k8s-version-435730) Calling .PreCreateCheck
	I0407 13:29:03.136104 1206720 main.go:141] libmachine: (old-k8s-version-435730) Calling .GetConfigRaw
	I0407 13:29:03.136563 1206720 main.go:141] libmachine: Creating machine...
	I0407 13:29:03.136579 1206720 main.go:141] libmachine: (old-k8s-version-435730) Calling .Create
	I0407 13:29:03.136736 1206720 main.go:141] libmachine: (old-k8s-version-435730) creating KVM machine...
	I0407 13:29:03.136751 1206720 main.go:141] libmachine: (old-k8s-version-435730) creating network...
	I0407 13:29:03.138997 1206720 main.go:141] libmachine: (old-k8s-version-435730) DBG | found existing default KVM network
	I0407 13:29:03.140237 1206720 main.go:141] libmachine: (old-k8s-version-435730) DBG | I0407 13:29:03.139941 1206777 network.go:206] using free private subnet 192.168.39.0/24: &{IP:192.168.39.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.39.0/24 Gateway:192.168.39.1 ClientMin:192.168.39.2 ClientMax:192.168.39.254 Broadcast:192.168.39.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc00020eef0}
	I0407 13:29:03.140444 1206720 main.go:141] libmachine: (old-k8s-version-435730) DBG | created network xml: 
	I0407 13:29:03.140477 1206720 main.go:141] libmachine: (old-k8s-version-435730) DBG | <network>
	I0407 13:29:03.140492 1206720 main.go:141] libmachine: (old-k8s-version-435730) DBG |   <name>mk-old-k8s-version-435730</name>
	I0407 13:29:03.140501 1206720 main.go:141] libmachine: (old-k8s-version-435730) DBG |   <dns enable='no'/>
	I0407 13:29:03.140518 1206720 main.go:141] libmachine: (old-k8s-version-435730) DBG |   
	I0407 13:29:03.140530 1206720 main.go:141] libmachine: (old-k8s-version-435730) DBG |   <ip address='192.168.39.1' netmask='255.255.255.0'>
	I0407 13:29:03.140542 1206720 main.go:141] libmachine: (old-k8s-version-435730) DBG |     <dhcp>
	I0407 13:29:03.140550 1206720 main.go:141] libmachine: (old-k8s-version-435730) DBG |       <range start='192.168.39.2' end='192.168.39.253'/>
	I0407 13:29:03.140602 1206720 main.go:141] libmachine: (old-k8s-version-435730) DBG |     </dhcp>
	I0407 13:29:03.140638 1206720 main.go:141] libmachine: (old-k8s-version-435730) DBG |   </ip>
	I0407 13:29:03.140648 1206720 main.go:141] libmachine: (old-k8s-version-435730) DBG |   
	I0407 13:29:03.140655 1206720 main.go:141] libmachine: (old-k8s-version-435730) DBG | </network>
	I0407 13:29:03.140686 1206720 main.go:141] libmachine: (old-k8s-version-435730) DBG | 
	I0407 13:29:03.147126 1206720 main.go:141] libmachine: (old-k8s-version-435730) DBG | trying to create private KVM network mk-old-k8s-version-435730 192.168.39.0/24...
	I0407 13:29:03.269865 1206720 main.go:141] libmachine: (old-k8s-version-435730) DBG | private KVM network mk-old-k8s-version-435730 192.168.39.0/24 created
	I0407 13:29:03.269897 1206720 main.go:141] libmachine: (old-k8s-version-435730) setting up store path in /home/jenkins/minikube-integration/20602-1162386/.minikube/machines/old-k8s-version-435730 ...
	I0407 13:29:03.269912 1206720 main.go:141] libmachine: (old-k8s-version-435730) DBG | I0407 13:29:03.263475 1206777 common.go:144] Making disk image using store path: /home/jenkins/minikube-integration/20602-1162386/.minikube
	I0407 13:29:03.269933 1206720 main.go:141] libmachine: (old-k8s-version-435730) building disk image from file:///home/jenkins/minikube-integration/20602-1162386/.minikube/cache/iso/amd64/minikube-v1.35.0-amd64.iso
	I0407 13:29:03.269954 1206720 main.go:141] libmachine: (old-k8s-version-435730) Downloading /home/jenkins/minikube-integration/20602-1162386/.minikube/cache/boot2docker.iso from file:///home/jenkins/minikube-integration/20602-1162386/.minikube/cache/iso/amd64/minikube-v1.35.0-amd64.iso...
	I0407 13:29:03.638730 1206720 main.go:141] libmachine: (old-k8s-version-435730) DBG | I0407 13:29:03.638518 1206777 common.go:151] Creating ssh key: /home/jenkins/minikube-integration/20602-1162386/.minikube/machines/old-k8s-version-435730/id_rsa...
	I0407 13:29:03.724878 1206720 main.go:141] libmachine: (old-k8s-version-435730) DBG | I0407 13:29:03.724608 1206777 common.go:157] Creating raw disk image: /home/jenkins/minikube-integration/20602-1162386/.minikube/machines/old-k8s-version-435730/old-k8s-version-435730.rawdisk...
	I0407 13:29:03.724944 1206720 main.go:141] libmachine: (old-k8s-version-435730) DBG | Writing magic tar header
	I0407 13:29:03.724959 1206720 main.go:141] libmachine: (old-k8s-version-435730) setting executable bit set on /home/jenkins/minikube-integration/20602-1162386/.minikube/machines/old-k8s-version-435730 (perms=drwx------)
	I0407 13:29:03.724977 1206720 main.go:141] libmachine: (old-k8s-version-435730) setting executable bit set on /home/jenkins/minikube-integration/20602-1162386/.minikube/machines (perms=drwxr-xr-x)
	I0407 13:29:03.724984 1206720 main.go:141] libmachine: (old-k8s-version-435730) setting executable bit set on /home/jenkins/minikube-integration/20602-1162386/.minikube (perms=drwxr-xr-x)
	I0407 13:29:03.724992 1206720 main.go:141] libmachine: (old-k8s-version-435730) setting executable bit set on /home/jenkins/minikube-integration/20602-1162386 (perms=drwxrwxr-x)
	I0407 13:29:03.724999 1206720 main.go:141] libmachine: (old-k8s-version-435730) setting executable bit set on /home/jenkins/minikube-integration (perms=drwxrwxr-x)
	I0407 13:29:03.725006 1206720 main.go:141] libmachine: (old-k8s-version-435730) setting executable bit set on /home/jenkins (perms=drwxr-xr-x)
	I0407 13:29:03.725015 1206720 main.go:141] libmachine: (old-k8s-version-435730) creating domain...
	I0407 13:29:03.725788 1206720 main.go:141] libmachine: (old-k8s-version-435730) DBG | Writing SSH key tar header
	I0407 13:29:03.725876 1206720 main.go:141] libmachine: (old-k8s-version-435730) DBG | I0407 13:29:03.724749 1206777 common.go:171] Fixing permissions on /home/jenkins/minikube-integration/20602-1162386/.minikube/machines/old-k8s-version-435730 ...
	I0407 13:29:03.725908 1206720 main.go:141] libmachine: (old-k8s-version-435730) DBG | checking permissions on dir: /home/jenkins/minikube-integration/20602-1162386/.minikube/machines/old-k8s-version-435730
	I0407 13:29:03.725930 1206720 main.go:141] libmachine: (old-k8s-version-435730) DBG | checking permissions on dir: /home/jenkins/minikube-integration/20602-1162386/.minikube/machines
	I0407 13:29:03.725941 1206720 main.go:141] libmachine: (old-k8s-version-435730) DBG | checking permissions on dir: /home/jenkins/minikube-integration/20602-1162386/.minikube
	I0407 13:29:03.725951 1206720 main.go:141] libmachine: (old-k8s-version-435730) DBG | checking permissions on dir: /home/jenkins/minikube-integration/20602-1162386
	I0407 13:29:03.725968 1206720 main.go:141] libmachine: (old-k8s-version-435730) DBG | checking permissions on dir: /home/jenkins/minikube-integration
	I0407 13:29:03.725977 1206720 main.go:141] libmachine: (old-k8s-version-435730) DBG | checking permissions on dir: /home/jenkins
	I0407 13:29:03.725991 1206720 main.go:141] libmachine: (old-k8s-version-435730) DBG | checking permissions on dir: /home
	I0407 13:29:03.726000 1206720 main.go:141] libmachine: (old-k8s-version-435730) DBG | skipping /home - not owner
	I0407 13:29:03.727191 1206720 main.go:141] libmachine: (old-k8s-version-435730) define libvirt domain using xml: 
	I0407 13:29:03.727257 1206720 main.go:141] libmachine: (old-k8s-version-435730) <domain type='kvm'>
	I0407 13:29:03.727458 1206720 main.go:141] libmachine: (old-k8s-version-435730)   <name>old-k8s-version-435730</name>
	I0407 13:29:03.727513 1206720 main.go:141] libmachine: (old-k8s-version-435730)   <memory unit='MiB'>2200</memory>
	I0407 13:29:03.727536 1206720 main.go:141] libmachine: (old-k8s-version-435730)   <vcpu>2</vcpu>
	I0407 13:29:03.727545 1206720 main.go:141] libmachine: (old-k8s-version-435730)   <features>
	I0407 13:29:03.727557 1206720 main.go:141] libmachine: (old-k8s-version-435730)     <acpi/>
	I0407 13:29:03.727648 1206720 main.go:141] libmachine: (old-k8s-version-435730)     <apic/>
	I0407 13:29:03.727665 1206720 main.go:141] libmachine: (old-k8s-version-435730)     <pae/>
	I0407 13:29:03.727936 1206720 main.go:141] libmachine: (old-k8s-version-435730)     
	I0407 13:29:03.728054 1206720 main.go:141] libmachine: (old-k8s-version-435730)   </features>
	I0407 13:29:03.728091 1206720 main.go:141] libmachine: (old-k8s-version-435730)   <cpu mode='host-passthrough'>
	I0407 13:29:03.728146 1206720 main.go:141] libmachine: (old-k8s-version-435730)   
	I0407 13:29:03.728168 1206720 main.go:141] libmachine: (old-k8s-version-435730)   </cpu>
	I0407 13:29:03.728177 1206720 main.go:141] libmachine: (old-k8s-version-435730)   <os>
	I0407 13:29:03.728189 1206720 main.go:141] libmachine: (old-k8s-version-435730)     <type>hvm</type>
	I0407 13:29:03.728213 1206720 main.go:141] libmachine: (old-k8s-version-435730)     <boot dev='cdrom'/>
	I0407 13:29:03.728217 1206720 main.go:141] libmachine: (old-k8s-version-435730)     <boot dev='hd'/>
	I0407 13:29:03.728227 1206720 main.go:141] libmachine: (old-k8s-version-435730)     <bootmenu enable='no'/>
	I0407 13:29:03.728231 1206720 main.go:141] libmachine: (old-k8s-version-435730)   </os>
	I0407 13:29:03.728236 1206720 main.go:141] libmachine: (old-k8s-version-435730)   <devices>
	I0407 13:29:03.728241 1206720 main.go:141] libmachine: (old-k8s-version-435730)     <disk type='file' device='cdrom'>
	I0407 13:29:03.728254 1206720 main.go:141] libmachine: (old-k8s-version-435730)       <source file='/home/jenkins/minikube-integration/20602-1162386/.minikube/machines/old-k8s-version-435730/boot2docker.iso'/>
	I0407 13:29:03.728270 1206720 main.go:141] libmachine: (old-k8s-version-435730)       <target dev='hdc' bus='scsi'/>
	I0407 13:29:03.728387 1206720 main.go:141] libmachine: (old-k8s-version-435730)       <readonly/>
	I0407 13:29:03.728436 1206720 main.go:141] libmachine: (old-k8s-version-435730)     </disk>
	I0407 13:29:03.728453 1206720 main.go:141] libmachine: (old-k8s-version-435730)     <disk type='file' device='disk'>
	I0407 13:29:03.728477 1206720 main.go:141] libmachine: (old-k8s-version-435730)       <driver name='qemu' type='raw' cache='default' io='threads' />
	I0407 13:29:03.728503 1206720 main.go:141] libmachine: (old-k8s-version-435730)       <source file='/home/jenkins/minikube-integration/20602-1162386/.minikube/machines/old-k8s-version-435730/old-k8s-version-435730.rawdisk'/>
	I0407 13:29:03.728519 1206720 main.go:141] libmachine: (old-k8s-version-435730)       <target dev='hda' bus='virtio'/>
	I0407 13:29:03.728536 1206720 main.go:141] libmachine: (old-k8s-version-435730)     </disk>
	I0407 13:29:03.728555 1206720 main.go:141] libmachine: (old-k8s-version-435730)     <interface type='network'>
	I0407 13:29:03.728574 1206720 main.go:141] libmachine: (old-k8s-version-435730)       <source network='mk-old-k8s-version-435730'/>
	I0407 13:29:03.728586 1206720 main.go:141] libmachine: (old-k8s-version-435730)       <model type='virtio'/>
	I0407 13:29:03.728596 1206720 main.go:141] libmachine: (old-k8s-version-435730)     </interface>
	I0407 13:29:03.728610 1206720 main.go:141] libmachine: (old-k8s-version-435730)     <interface type='network'>
	I0407 13:29:03.728621 1206720 main.go:141] libmachine: (old-k8s-version-435730)       <source network='default'/>
	I0407 13:29:03.728641 1206720 main.go:141] libmachine: (old-k8s-version-435730)       <model type='virtio'/>
	I0407 13:29:03.728655 1206720 main.go:141] libmachine: (old-k8s-version-435730)     </interface>
	I0407 13:29:03.728666 1206720 main.go:141] libmachine: (old-k8s-version-435730)     <serial type='pty'>
	I0407 13:29:03.728676 1206720 main.go:141] libmachine: (old-k8s-version-435730)       <target port='0'/>
	I0407 13:29:03.728687 1206720 main.go:141] libmachine: (old-k8s-version-435730)     </serial>
	I0407 13:29:03.728697 1206720 main.go:141] libmachine: (old-k8s-version-435730)     <console type='pty'>
	I0407 13:29:03.728716 1206720 main.go:141] libmachine: (old-k8s-version-435730)       <target type='serial' port='0'/>
	I0407 13:29:03.728728 1206720 main.go:141] libmachine: (old-k8s-version-435730)     </console>
	I0407 13:29:03.728740 1206720 main.go:141] libmachine: (old-k8s-version-435730)     <rng model='virtio'>
	I0407 13:29:03.728751 1206720 main.go:141] libmachine: (old-k8s-version-435730)       <backend model='random'>/dev/random</backend>
	I0407 13:29:03.728763 1206720 main.go:141] libmachine: (old-k8s-version-435730)     </rng>
	I0407 13:29:03.728773 1206720 main.go:141] libmachine: (old-k8s-version-435730)     
	I0407 13:29:03.728783 1206720 main.go:141] libmachine: (old-k8s-version-435730)     
	I0407 13:29:03.728833 1206720 main.go:141] libmachine: (old-k8s-version-435730)   </devices>
	I0407 13:29:03.728871 1206720 main.go:141] libmachine: (old-k8s-version-435730) </domain>
	I0407 13:29:03.728887 1206720 main.go:141] libmachine: (old-k8s-version-435730) 
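	For readability, the same domain definition with the log prefixes stripped. The content below is copied verbatim from the lines above; the empty placeholder lines in the log (optional template elements that were not set for this profile) are simply omitted here:
	
	<domain type='kvm'>
	  <name>old-k8s-version-435730</name>
	  <memory unit='MiB'>2200</memory>
	  <vcpu>2</vcpu>
	  <features>
	    <acpi/>
	    <apic/>
	    <pae/>
	  </features>
	  <cpu mode='host-passthrough'>
	  </cpu>
	  <os>
	    <type>hvm</type>
	    <boot dev='cdrom'/>
	    <boot dev='hd'/>
	    <bootmenu enable='no'/>
	  </os>
	  <devices>
	    <disk type='file' device='cdrom'>
	      <source file='/home/jenkins/minikube-integration/20602-1162386/.minikube/machines/old-k8s-version-435730/boot2docker.iso'/>
	      <target dev='hdc' bus='scsi'/>
	      <readonly/>
	    </disk>
	    <disk type='file' device='disk'>
	      <driver name='qemu' type='raw' cache='default' io='threads' />
	      <source file='/home/jenkins/minikube-integration/20602-1162386/.minikube/machines/old-k8s-version-435730/old-k8s-version-435730.rawdisk'/>
	      <target dev='hda' bus='virtio'/>
	    </disk>
	    <interface type='network'>
	      <source network='mk-old-k8s-version-435730'/>
	      <model type='virtio'/>
	    </interface>
	    <interface type='network'>
	      <source network='default'/>
	      <model type='virtio'/>
	    </interface>
	    <serial type='pty'>
	      <target port='0'/>
	    </serial>
	    <console type='pty'>
	      <target type='serial' port='0'/>
	    </console>
	    <rng model='virtio'>
	      <backend model='random'>/dev/random</backend>
	    </rng>
	  </devices>
	</domain>
	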
	I0407 13:29:03.734142 1206720 main.go:141] libmachine: (old-k8s-version-435730) DBG | domain old-k8s-version-435730 has defined MAC address 52:54:00:f3:bd:52 in network default
	I0407 13:29:03.735231 1206720 main.go:141] libmachine: (old-k8s-version-435730) starting domain...
	I0407 13:29:03.735262 1206720 main.go:141] libmachine: (old-k8s-version-435730) ensuring networks are active...
	I0407 13:29:03.735281 1206720 main.go:141] libmachine: (old-k8s-version-435730) DBG | domain old-k8s-version-435730 has defined MAC address 52:54:00:e3:da:b2 in network mk-old-k8s-version-435730
	I0407 13:29:03.736456 1206720 main.go:141] libmachine: (old-k8s-version-435730) Ensuring network default is active
	I0407 13:29:03.736888 1206720 main.go:141] libmachine: (old-k8s-version-435730) Ensuring network mk-old-k8s-version-435730 is active
	I0407 13:29:03.737764 1206720 main.go:141] libmachine: (old-k8s-version-435730) getting domain XML...
	I0407 13:29:03.738779 1206720 main.go:141] libmachine: (old-k8s-version-435730) creating domain...
	I0407 13:29:05.425029 1206720 main.go:141] libmachine: (old-k8s-version-435730) waiting for IP...
	I0407 13:29:05.426008 1206720 main.go:141] libmachine: (old-k8s-version-435730) DBG | domain old-k8s-version-435730 has defined MAC address 52:54:00:e3:da:b2 in network mk-old-k8s-version-435730
	I0407 13:29:05.426473 1206720 main.go:141] libmachine: (old-k8s-version-435730) DBG | unable to find current IP address of domain old-k8s-version-435730 in network mk-old-k8s-version-435730
	I0407 13:29:05.426543 1206720 main.go:141] libmachine: (old-k8s-version-435730) DBG | I0407 13:29:05.426485 1206777 retry.go:31] will retry after 240.732984ms: waiting for domain to come up
	I0407 13:29:05.669344 1206720 main.go:141] libmachine: (old-k8s-version-435730) DBG | domain old-k8s-version-435730 has defined MAC address 52:54:00:e3:da:b2 in network mk-old-k8s-version-435730
	I0407 13:29:05.669974 1206720 main.go:141] libmachine: (old-k8s-version-435730) DBG | unable to find current IP address of domain old-k8s-version-435730 in network mk-old-k8s-version-435730
	I0407 13:29:05.670002 1206720 main.go:141] libmachine: (old-k8s-version-435730) DBG | I0407 13:29:05.669902 1206777 retry.go:31] will retry after 300.446039ms: waiting for domain to come up
	I0407 13:29:05.972845 1206720 main.go:141] libmachine: (old-k8s-version-435730) DBG | domain old-k8s-version-435730 has defined MAC address 52:54:00:e3:da:b2 in network mk-old-k8s-version-435730
	I0407 13:29:05.973612 1206720 main.go:141] libmachine: (old-k8s-version-435730) DBG | unable to find current IP address of domain old-k8s-version-435730 in network mk-old-k8s-version-435730
	I0407 13:29:05.973701 1206720 main.go:141] libmachine: (old-k8s-version-435730) DBG | I0407 13:29:05.973582 1206777 retry.go:31] will retry after 307.704268ms: waiting for domain to come up
	I0407 13:29:06.283301 1206720 main.go:141] libmachine: (old-k8s-version-435730) DBG | domain old-k8s-version-435730 has defined MAC address 52:54:00:e3:da:b2 in network mk-old-k8s-version-435730
	I0407 13:29:06.283883 1206720 main.go:141] libmachine: (old-k8s-version-435730) DBG | unable to find current IP address of domain old-k8s-version-435730 in network mk-old-k8s-version-435730
	I0407 13:29:06.283949 1206720 main.go:141] libmachine: (old-k8s-version-435730) DBG | I0407 13:29:06.283869 1206777 retry.go:31] will retry after 579.147006ms: waiting for domain to come up
	I0407 13:29:06.865451 1206720 main.go:141] libmachine: (old-k8s-version-435730) DBG | domain old-k8s-version-435730 has defined MAC address 52:54:00:e3:da:b2 in network mk-old-k8s-version-435730
	I0407 13:29:06.866533 1206720 main.go:141] libmachine: (old-k8s-version-435730) DBG | unable to find current IP address of domain old-k8s-version-435730 in network mk-old-k8s-version-435730
	I0407 13:29:06.866572 1206720 main.go:141] libmachine: (old-k8s-version-435730) DBG | I0407 13:29:06.866437 1206777 retry.go:31] will retry after 540.673117ms: waiting for domain to come up
	I0407 13:29:07.409806 1206720 main.go:141] libmachine: (old-k8s-version-435730) DBG | domain old-k8s-version-435730 has defined MAC address 52:54:00:e3:da:b2 in network mk-old-k8s-version-435730
	I0407 13:29:07.410334 1206720 main.go:141] libmachine: (old-k8s-version-435730) DBG | unable to find current IP address of domain old-k8s-version-435730 in network mk-old-k8s-version-435730
	I0407 13:29:07.410369 1206720 main.go:141] libmachine: (old-k8s-version-435730) DBG | I0407 13:29:07.410269 1206777 retry.go:31] will retry after 919.802789ms: waiting for domain to come up
	I0407 13:29:08.332761 1206720 main.go:141] libmachine: (old-k8s-version-435730) DBG | domain old-k8s-version-435730 has defined MAC address 52:54:00:e3:da:b2 in network mk-old-k8s-version-435730
	I0407 13:29:08.333241 1206720 main.go:141] libmachine: (old-k8s-version-435730) DBG | unable to find current IP address of domain old-k8s-version-435730 in network mk-old-k8s-version-435730
	I0407 13:29:08.333262 1206720 main.go:141] libmachine: (old-k8s-version-435730) DBG | I0407 13:29:08.333215 1206777 retry.go:31] will retry after 1.00646546s: waiting for domain to come up
	I0407 13:29:09.341019 1206720 main.go:141] libmachine: (old-k8s-version-435730) DBG | domain old-k8s-version-435730 has defined MAC address 52:54:00:e3:da:b2 in network mk-old-k8s-version-435730
	I0407 13:29:09.341566 1206720 main.go:141] libmachine: (old-k8s-version-435730) DBG | unable to find current IP address of domain old-k8s-version-435730 in network mk-old-k8s-version-435730
	I0407 13:29:09.341603 1206720 main.go:141] libmachine: (old-k8s-version-435730) DBG | I0407 13:29:09.341523 1206777 retry.go:31] will retry after 1.396900093s: waiting for domain to come up
	I0407 13:29:10.740150 1206720 main.go:141] libmachine: (old-k8s-version-435730) DBG | domain old-k8s-version-435730 has defined MAC address 52:54:00:e3:da:b2 in network mk-old-k8s-version-435730
	I0407 13:29:10.740907 1206720 main.go:141] libmachine: (old-k8s-version-435730) DBG | unable to find current IP address of domain old-k8s-version-435730 in network mk-old-k8s-version-435730
	I0407 13:29:10.740970 1206720 main.go:141] libmachine: (old-k8s-version-435730) DBG | I0407 13:29:10.740779 1206777 retry.go:31] will retry after 1.636977425s: waiting for domain to come up
	I0407 13:29:12.379465 1206720 main.go:141] libmachine: (old-k8s-version-435730) DBG | domain old-k8s-version-435730 has defined MAC address 52:54:00:e3:da:b2 in network mk-old-k8s-version-435730
	I0407 13:29:12.380402 1206720 main.go:141] libmachine: (old-k8s-version-435730) DBG | unable to find current IP address of domain old-k8s-version-435730 in network mk-old-k8s-version-435730
	I0407 13:29:12.380434 1206720 main.go:141] libmachine: (old-k8s-version-435730) DBG | I0407 13:29:12.380334 1206777 retry.go:31] will retry after 2.247187975s: waiting for domain to come up
	I0407 13:29:14.630031 1206720 main.go:141] libmachine: (old-k8s-version-435730) DBG | domain old-k8s-version-435730 has defined MAC address 52:54:00:e3:da:b2 in network mk-old-k8s-version-435730
	I0407 13:29:14.630740 1206720 main.go:141] libmachine: (old-k8s-version-435730) DBG | unable to find current IP address of domain old-k8s-version-435730 in network mk-old-k8s-version-435730
	I0407 13:29:14.630767 1206720 main.go:141] libmachine: (old-k8s-version-435730) DBG | I0407 13:29:14.630698 1206777 retry.go:31] will retry after 1.76677182s: waiting for domain to come up
	I0407 13:29:16.398950 1206720 main.go:141] libmachine: (old-k8s-version-435730) DBG | domain old-k8s-version-435730 has defined MAC address 52:54:00:e3:da:b2 in network mk-old-k8s-version-435730
	I0407 13:29:16.399526 1206720 main.go:141] libmachine: (old-k8s-version-435730) DBG | unable to find current IP address of domain old-k8s-version-435730 in network mk-old-k8s-version-435730
	I0407 13:29:16.399557 1206720 main.go:141] libmachine: (old-k8s-version-435730) DBG | I0407 13:29:16.399466 1206777 retry.go:31] will retry after 2.279920734s: waiting for domain to come up
	I0407 13:29:18.680932 1206720 main.go:141] libmachine: (old-k8s-version-435730) DBG | domain old-k8s-version-435730 has defined MAC address 52:54:00:e3:da:b2 in network mk-old-k8s-version-435730
	I0407 13:29:18.681573 1206720 main.go:141] libmachine: (old-k8s-version-435730) DBG | unable to find current IP address of domain old-k8s-version-435730 in network mk-old-k8s-version-435730
	I0407 13:29:18.681607 1206720 main.go:141] libmachine: (old-k8s-version-435730) DBG | I0407 13:29:18.681486 1206777 retry.go:31] will retry after 3.99702619s: waiting for domain to come up
	I0407 13:29:22.680586 1206720 main.go:141] libmachine: (old-k8s-version-435730) DBG | domain old-k8s-version-435730 has defined MAC address 52:54:00:e3:da:b2 in network mk-old-k8s-version-435730
	I0407 13:29:22.681295 1206720 main.go:141] libmachine: (old-k8s-version-435730) DBG | unable to find current IP address of domain old-k8s-version-435730 in network mk-old-k8s-version-435730
	I0407 13:29:22.681367 1206720 main.go:141] libmachine: (old-k8s-version-435730) DBG | I0407 13:29:22.681254 1206777 retry.go:31] will retry after 3.724879233s: waiting for domain to come up
	I0407 13:29:26.410195 1206720 main.go:141] libmachine: (old-k8s-version-435730) DBG | domain old-k8s-version-435730 has defined MAC address 52:54:00:e3:da:b2 in network mk-old-k8s-version-435730
	I0407 13:29:26.410696 1206720 main.go:141] libmachine: (old-k8s-version-435730) found domain IP: 192.168.39.211
	I0407 13:29:26.410723 1206720 main.go:141] libmachine: (old-k8s-version-435730) reserving static IP address...
	I0407 13:29:26.410738 1206720 main.go:141] libmachine: (old-k8s-version-435730) DBG | domain old-k8s-version-435730 has current primary IP address 192.168.39.211 and MAC address 52:54:00:e3:da:b2 in network mk-old-k8s-version-435730
	I0407 13:29:26.411376 1206720 main.go:141] libmachine: (old-k8s-version-435730) DBG | unable to find host DHCP lease matching {name: "old-k8s-version-435730", mac: "52:54:00:e3:da:b2", ip: "192.168.39.211"} in network mk-old-k8s-version-435730
	I0407 13:29:26.518938 1206720 main.go:141] libmachine: (old-k8s-version-435730) reserved static IP address 192.168.39.211 for domain old-k8s-version-435730
	I0407 13:29:26.518970 1206720 main.go:141] libmachine: (old-k8s-version-435730) waiting for SSH...
	I0407 13:29:26.518980 1206720 main.go:141] libmachine: (old-k8s-version-435730) DBG | Getting to WaitForSSH function...
	I0407 13:29:26.522517 1206720 main.go:141] libmachine: (old-k8s-version-435730) DBG | domain old-k8s-version-435730 has defined MAC address 52:54:00:e3:da:b2 in network mk-old-k8s-version-435730
	I0407 13:29:26.523039 1206720 main.go:141] libmachine: (old-k8s-version-435730) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e3:da:b2", ip: ""} in network mk-old-k8s-version-435730: {Iface:virbr1 ExpiryTime:2025-04-07 14:29:20 +0000 UTC Type:0 Mac:52:54:00:e3:da:b2 Iaid: IPaddr:192.168.39.211 Prefix:24 Hostname:minikube Clientid:01:52:54:00:e3:da:b2}
	I0407 13:29:26.523081 1206720 main.go:141] libmachine: (old-k8s-version-435730) DBG | domain old-k8s-version-435730 has defined IP address 192.168.39.211 and MAC address 52:54:00:e3:da:b2 in network mk-old-k8s-version-435730
	I0407 13:29:26.523288 1206720 main.go:141] libmachine: (old-k8s-version-435730) DBG | Using SSH client type: external
	I0407 13:29:26.523319 1206720 main.go:141] libmachine: (old-k8s-version-435730) DBG | Using SSH private key: /home/jenkins/minikube-integration/20602-1162386/.minikube/machines/old-k8s-version-435730/id_rsa (-rw-------)
	I0407 13:29:26.523359 1206720 main.go:141] libmachine: (old-k8s-version-435730) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.39.211 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/20602-1162386/.minikube/machines/old-k8s-version-435730/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0407 13:29:26.523370 1206720 main.go:141] libmachine: (old-k8s-version-435730) DBG | About to run SSH command:
	I0407 13:29:26.523403 1206720 main.go:141] libmachine: (old-k8s-version-435730) DBG | exit 0
	I0407 13:29:26.650715 1206720 main.go:141] libmachine: (old-k8s-version-435730) DBG | SSH cmd err, output: <nil>: 
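	Reassembled from the argument list logged above, the external SSH probe is equivalent to running the following single command from the host (the trailing "exit 0" is the command minikube executes to confirm the guest accepts connections):
	
	/usr/bin/ssh -F /dev/null \
	  -o ConnectionAttempts=3 -o ConnectTimeout=10 \
	  -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet \
	  -o PasswordAuthentication=no -o ServerAliveInterval=60 \
	  -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null \
	  docker@192.168.39.211 -o IdentitiesOnly=yes \
	  -i /home/jenkins/minikube-integration/20602-1162386/.minikube/machines/old-k8s-version-435730/id_rsa \
	  -p 22 "exit 0"
	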
	I0407 13:29:26.650936 1206720 main.go:141] libmachine: (old-k8s-version-435730) KVM machine creation complete
	I0407 13:29:26.651488 1206720 main.go:141] libmachine: (old-k8s-version-435730) Calling .GetConfigRaw
	I0407 13:29:26.652237 1206720 main.go:141] libmachine: (old-k8s-version-435730) Calling .DriverName
	I0407 13:29:26.652538 1206720 main.go:141] libmachine: (old-k8s-version-435730) Calling .DriverName
	I0407 13:29:26.652794 1206720 main.go:141] libmachine: Waiting for machine to be running, this may take a few minutes...
	I0407 13:29:26.652812 1206720 main.go:141] libmachine: (old-k8s-version-435730) Calling .GetState
	I0407 13:29:26.655076 1206720 main.go:141] libmachine: Detecting operating system of created instance...
	I0407 13:29:26.655101 1206720 main.go:141] libmachine: Waiting for SSH to be available...
	I0407 13:29:26.655121 1206720 main.go:141] libmachine: Getting to WaitForSSH function...
	I0407 13:29:26.655131 1206720 main.go:141] libmachine: (old-k8s-version-435730) Calling .GetSSHHostname
	I0407 13:29:26.658257 1206720 main.go:141] libmachine: (old-k8s-version-435730) DBG | domain old-k8s-version-435730 has defined MAC address 52:54:00:e3:da:b2 in network mk-old-k8s-version-435730
	I0407 13:29:26.658753 1206720 main.go:141] libmachine: (old-k8s-version-435730) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e3:da:b2", ip: ""} in network mk-old-k8s-version-435730: {Iface:virbr1 ExpiryTime:2025-04-07 14:29:20 +0000 UTC Type:0 Mac:52:54:00:e3:da:b2 Iaid: IPaddr:192.168.39.211 Prefix:24 Hostname:old-k8s-version-435730 Clientid:01:52:54:00:e3:da:b2}
	I0407 13:29:26.658784 1206720 main.go:141] libmachine: (old-k8s-version-435730) DBG | domain old-k8s-version-435730 has defined IP address 192.168.39.211 and MAC address 52:54:00:e3:da:b2 in network mk-old-k8s-version-435730
	I0407 13:29:26.658956 1206720 main.go:141] libmachine: (old-k8s-version-435730) Calling .GetSSHPort
	I0407 13:29:26.659204 1206720 main.go:141] libmachine: (old-k8s-version-435730) Calling .GetSSHKeyPath
	I0407 13:29:26.659389 1206720 main.go:141] libmachine: (old-k8s-version-435730) Calling .GetSSHKeyPath
	I0407 13:29:26.659518 1206720 main.go:141] libmachine: (old-k8s-version-435730) Calling .GetSSHUsername
	I0407 13:29:26.659683 1206720 main.go:141] libmachine: Using SSH client type: native
	I0407 13:29:26.659978 1206720 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x836360] 0x839060 <nil>  [] 0s} 192.168.39.211 22 <nil> <nil>}
	I0407 13:29:26.659992 1206720 main.go:141] libmachine: About to run SSH command:
	exit 0
	I0407 13:29:26.765680 1206720 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0407 13:29:26.765737 1206720 main.go:141] libmachine: Detecting the provisioner...
	I0407 13:29:26.765749 1206720 main.go:141] libmachine: (old-k8s-version-435730) Calling .GetSSHHostname
	I0407 13:29:26.769225 1206720 main.go:141] libmachine: (old-k8s-version-435730) DBG | domain old-k8s-version-435730 has defined MAC address 52:54:00:e3:da:b2 in network mk-old-k8s-version-435730
	I0407 13:29:26.769653 1206720 main.go:141] libmachine: (old-k8s-version-435730) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e3:da:b2", ip: ""} in network mk-old-k8s-version-435730: {Iface:virbr1 ExpiryTime:2025-04-07 14:29:20 +0000 UTC Type:0 Mac:52:54:00:e3:da:b2 Iaid: IPaddr:192.168.39.211 Prefix:24 Hostname:old-k8s-version-435730 Clientid:01:52:54:00:e3:da:b2}
	I0407 13:29:26.769690 1206720 main.go:141] libmachine: (old-k8s-version-435730) DBG | domain old-k8s-version-435730 has defined IP address 192.168.39.211 and MAC address 52:54:00:e3:da:b2 in network mk-old-k8s-version-435730
	I0407 13:29:26.769965 1206720 main.go:141] libmachine: (old-k8s-version-435730) Calling .GetSSHPort
	I0407 13:29:26.770273 1206720 main.go:141] libmachine: (old-k8s-version-435730) Calling .GetSSHKeyPath
	I0407 13:29:26.770479 1206720 main.go:141] libmachine: (old-k8s-version-435730) Calling .GetSSHKeyPath
	I0407 13:29:26.770709 1206720 main.go:141] libmachine: (old-k8s-version-435730) Calling .GetSSHUsername
	I0407 13:29:26.770910 1206720 main.go:141] libmachine: Using SSH client type: native
	I0407 13:29:26.771175 1206720 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x836360] 0x839060 <nil>  [] 0s} 192.168.39.211 22 <nil> <nil>}
	I0407 13:29:26.771188 1206720 main.go:141] libmachine: About to run SSH command:
	cat /etc/os-release
	I0407 13:29:26.878891 1206720 main.go:141] libmachine: SSH cmd err, output: <nil>: NAME=Buildroot
	VERSION=2023.02.9-dirty
	ID=buildroot
	VERSION_ID=2023.02.9
	PRETTY_NAME="Buildroot 2023.02.9"
	
	I0407 13:29:26.878980 1206720 main.go:141] libmachine: found compatible host: buildroot
	I0407 13:29:26.878990 1206720 main.go:141] libmachine: Provisioning with buildroot...
	I0407 13:29:26.878999 1206720 main.go:141] libmachine: (old-k8s-version-435730) Calling .GetMachineName
	I0407 13:29:26.879358 1206720 buildroot.go:166] provisioning hostname "old-k8s-version-435730"
	I0407 13:29:26.879393 1206720 main.go:141] libmachine: (old-k8s-version-435730) Calling .GetMachineName
	I0407 13:29:26.879621 1206720 main.go:141] libmachine: (old-k8s-version-435730) Calling .GetSSHHostname
	I0407 13:29:26.882571 1206720 main.go:141] libmachine: (old-k8s-version-435730) DBG | domain old-k8s-version-435730 has defined MAC address 52:54:00:e3:da:b2 in network mk-old-k8s-version-435730
	I0407 13:29:26.883103 1206720 main.go:141] libmachine: (old-k8s-version-435730) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e3:da:b2", ip: ""} in network mk-old-k8s-version-435730: {Iface:virbr1 ExpiryTime:2025-04-07 14:29:20 +0000 UTC Type:0 Mac:52:54:00:e3:da:b2 Iaid: IPaddr:192.168.39.211 Prefix:24 Hostname:old-k8s-version-435730 Clientid:01:52:54:00:e3:da:b2}
	I0407 13:29:26.883128 1206720 main.go:141] libmachine: (old-k8s-version-435730) DBG | domain old-k8s-version-435730 has defined IP address 192.168.39.211 and MAC address 52:54:00:e3:da:b2 in network mk-old-k8s-version-435730
	I0407 13:29:26.883387 1206720 main.go:141] libmachine: (old-k8s-version-435730) Calling .GetSSHPort
	I0407 13:29:26.883601 1206720 main.go:141] libmachine: (old-k8s-version-435730) Calling .GetSSHKeyPath
	I0407 13:29:26.883749 1206720 main.go:141] libmachine: (old-k8s-version-435730) Calling .GetSSHKeyPath
	I0407 13:29:26.883877 1206720 main.go:141] libmachine: (old-k8s-version-435730) Calling .GetSSHUsername
	I0407 13:29:26.884109 1206720 main.go:141] libmachine: Using SSH client type: native
	I0407 13:29:26.884317 1206720 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x836360] 0x839060 <nil>  [] 0s} 192.168.39.211 22 <nil> <nil>}
	I0407 13:29:26.884331 1206720 main.go:141] libmachine: About to run SSH command:
	sudo hostname old-k8s-version-435730 && echo "old-k8s-version-435730" | sudo tee /etc/hostname
	I0407 13:29:27.006816 1206720 main.go:141] libmachine: SSH cmd err, output: <nil>: old-k8s-version-435730
	
	I0407 13:29:27.006854 1206720 main.go:141] libmachine: (old-k8s-version-435730) Calling .GetSSHHostname
	I0407 13:29:27.010015 1206720 main.go:141] libmachine: (old-k8s-version-435730) DBG | domain old-k8s-version-435730 has defined MAC address 52:54:00:e3:da:b2 in network mk-old-k8s-version-435730
	I0407 13:29:27.010432 1206720 main.go:141] libmachine: (old-k8s-version-435730) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e3:da:b2", ip: ""} in network mk-old-k8s-version-435730: {Iface:virbr1 ExpiryTime:2025-04-07 14:29:20 +0000 UTC Type:0 Mac:52:54:00:e3:da:b2 Iaid: IPaddr:192.168.39.211 Prefix:24 Hostname:old-k8s-version-435730 Clientid:01:52:54:00:e3:da:b2}
	I0407 13:29:27.010465 1206720 main.go:141] libmachine: (old-k8s-version-435730) DBG | domain old-k8s-version-435730 has defined IP address 192.168.39.211 and MAC address 52:54:00:e3:da:b2 in network mk-old-k8s-version-435730
	I0407 13:29:27.010673 1206720 main.go:141] libmachine: (old-k8s-version-435730) Calling .GetSSHPort
	I0407 13:29:27.010902 1206720 main.go:141] libmachine: (old-k8s-version-435730) Calling .GetSSHKeyPath
	I0407 13:29:27.011088 1206720 main.go:141] libmachine: (old-k8s-version-435730) Calling .GetSSHKeyPath
	I0407 13:29:27.011251 1206720 main.go:141] libmachine: (old-k8s-version-435730) Calling .GetSSHUsername
	I0407 13:29:27.011440 1206720 main.go:141] libmachine: Using SSH client type: native
	I0407 13:29:27.011649 1206720 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x836360] 0x839060 <nil>  [] 0s} 192.168.39.211 22 <nil> <nil>}
	I0407 13:29:27.011665 1206720 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sold-k8s-version-435730' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 old-k8s-version-435730/g' /etc/hosts;
				else 
					echo '127.0.1.1 old-k8s-version-435730' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0407 13:29:27.128015 1206720 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0407 13:29:27.128055 1206720 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/20602-1162386/.minikube CaCertPath:/home/jenkins/minikube-integration/20602-1162386/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/20602-1162386/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/20602-1162386/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/20602-1162386/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/20602-1162386/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/20602-1162386/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/20602-1162386/.minikube}
	I0407 13:29:27.128123 1206720 buildroot.go:174] setting up certificates
	I0407 13:29:27.128141 1206720 provision.go:84] configureAuth start
	I0407 13:29:27.128158 1206720 main.go:141] libmachine: (old-k8s-version-435730) Calling .GetMachineName
	I0407 13:29:27.128471 1206720 main.go:141] libmachine: (old-k8s-version-435730) Calling .GetIP
	I0407 13:29:27.131909 1206720 main.go:141] libmachine: (old-k8s-version-435730) DBG | domain old-k8s-version-435730 has defined MAC address 52:54:00:e3:da:b2 in network mk-old-k8s-version-435730
	I0407 13:29:27.132346 1206720 main.go:141] libmachine: (old-k8s-version-435730) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e3:da:b2", ip: ""} in network mk-old-k8s-version-435730: {Iface:virbr1 ExpiryTime:2025-04-07 14:29:20 +0000 UTC Type:0 Mac:52:54:00:e3:da:b2 Iaid: IPaddr:192.168.39.211 Prefix:24 Hostname:old-k8s-version-435730 Clientid:01:52:54:00:e3:da:b2}
	I0407 13:29:27.132371 1206720 main.go:141] libmachine: (old-k8s-version-435730) DBG | domain old-k8s-version-435730 has defined IP address 192.168.39.211 and MAC address 52:54:00:e3:da:b2 in network mk-old-k8s-version-435730
	I0407 13:29:27.132630 1206720 main.go:141] libmachine: (old-k8s-version-435730) Calling .GetSSHHostname
	I0407 13:29:27.135545 1206720 main.go:141] libmachine: (old-k8s-version-435730) DBG | domain old-k8s-version-435730 has defined MAC address 52:54:00:e3:da:b2 in network mk-old-k8s-version-435730
	I0407 13:29:27.135976 1206720 main.go:141] libmachine: (old-k8s-version-435730) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e3:da:b2", ip: ""} in network mk-old-k8s-version-435730: {Iface:virbr1 ExpiryTime:2025-04-07 14:29:20 +0000 UTC Type:0 Mac:52:54:00:e3:da:b2 Iaid: IPaddr:192.168.39.211 Prefix:24 Hostname:old-k8s-version-435730 Clientid:01:52:54:00:e3:da:b2}
	I0407 13:29:27.136032 1206720 main.go:141] libmachine: (old-k8s-version-435730) DBG | domain old-k8s-version-435730 has defined IP address 192.168.39.211 and MAC address 52:54:00:e3:da:b2 in network mk-old-k8s-version-435730
	I0407 13:29:27.136350 1206720 provision.go:143] copyHostCerts
	I0407 13:29:27.136434 1206720 exec_runner.go:144] found /home/jenkins/minikube-integration/20602-1162386/.minikube/ca.pem, removing ...
	I0407 13:29:27.136455 1206720 exec_runner.go:203] rm: /home/jenkins/minikube-integration/20602-1162386/.minikube/ca.pem
	I0407 13:29:27.136522 1206720 exec_runner.go:151] cp: /home/jenkins/minikube-integration/20602-1162386/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/20602-1162386/.minikube/ca.pem (1078 bytes)
	I0407 13:29:27.136609 1206720 exec_runner.go:144] found /home/jenkins/minikube-integration/20602-1162386/.minikube/cert.pem, removing ...
	I0407 13:29:27.136619 1206720 exec_runner.go:203] rm: /home/jenkins/minikube-integration/20602-1162386/.minikube/cert.pem
	I0407 13:29:27.136641 1206720 exec_runner.go:151] cp: /home/jenkins/minikube-integration/20602-1162386/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/20602-1162386/.minikube/cert.pem (1123 bytes)
	I0407 13:29:27.136690 1206720 exec_runner.go:144] found /home/jenkins/minikube-integration/20602-1162386/.minikube/key.pem, removing ...
	I0407 13:29:27.136697 1206720 exec_runner.go:203] rm: /home/jenkins/minikube-integration/20602-1162386/.minikube/key.pem
	I0407 13:29:27.136718 1206720 exec_runner.go:151] cp: /home/jenkins/minikube-integration/20602-1162386/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/20602-1162386/.minikube/key.pem (1675 bytes)
	I0407 13:29:27.136790 1206720 provision.go:117] generating server cert: /home/jenkins/minikube-integration/20602-1162386/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/20602-1162386/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/20602-1162386/.minikube/certs/ca-key.pem org=jenkins.old-k8s-version-435730 san=[127.0.0.1 192.168.39.211 localhost minikube old-k8s-version-435730]
	I0407 13:29:27.669690 1206720 provision.go:177] copyRemoteCerts
	I0407 13:29:27.669779 1206720 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0407 13:29:27.669817 1206720 main.go:141] libmachine: (old-k8s-version-435730) Calling .GetSSHHostname
	I0407 13:29:27.673258 1206720 main.go:141] libmachine: (old-k8s-version-435730) DBG | domain old-k8s-version-435730 has defined MAC address 52:54:00:e3:da:b2 in network mk-old-k8s-version-435730
	I0407 13:29:27.673664 1206720 main.go:141] libmachine: (old-k8s-version-435730) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e3:da:b2", ip: ""} in network mk-old-k8s-version-435730: {Iface:virbr1 ExpiryTime:2025-04-07 14:29:20 +0000 UTC Type:0 Mac:52:54:00:e3:da:b2 Iaid: IPaddr:192.168.39.211 Prefix:24 Hostname:old-k8s-version-435730 Clientid:01:52:54:00:e3:da:b2}
	I0407 13:29:27.673698 1206720 main.go:141] libmachine: (old-k8s-version-435730) DBG | domain old-k8s-version-435730 has defined IP address 192.168.39.211 and MAC address 52:54:00:e3:da:b2 in network mk-old-k8s-version-435730
	I0407 13:29:27.673845 1206720 main.go:141] libmachine: (old-k8s-version-435730) Calling .GetSSHPort
	I0407 13:29:27.674108 1206720 main.go:141] libmachine: (old-k8s-version-435730) Calling .GetSSHKeyPath
	I0407 13:29:27.674292 1206720 main.go:141] libmachine: (old-k8s-version-435730) Calling .GetSSHUsername
	I0407 13:29:27.674465 1206720 sshutil.go:53] new ssh client: &{IP:192.168.39.211 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20602-1162386/.minikube/machines/old-k8s-version-435730/id_rsa Username:docker}
	I0407 13:29:27.761341 1206720 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20602-1162386/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0407 13:29:27.792470 1206720 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20602-1162386/.minikube/machines/server.pem --> /etc/docker/server.pem (1233 bytes)
	I0407 13:29:27.821594 1206720 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20602-1162386/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0407 13:29:27.851663 1206720 provision.go:87] duration metric: took 723.502166ms to configureAuth
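	Not part of the log, but a quick way to confirm the SANs listed in the provision step above actually made it into the generated server certificate (assumes openssl is available on the host; the cert path is taken from the copyRemoteCerts step):
	
	# hypothetical verification step, not run by the test itself
	openssl x509 -noout -text \
	  -in /home/jenkins/minikube-integration/20602-1162386/.minikube/machines/server.pem \
	  | grep -A1 'Subject Alternative Name'
	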
	I0407 13:29:27.851706 1206720 buildroot.go:189] setting minikube options for container-runtime
	I0407 13:29:27.851913 1206720 config.go:182] Loaded profile config "old-k8s-version-435730": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.20.0
	I0407 13:29:27.852035 1206720 main.go:141] libmachine: (old-k8s-version-435730) Calling .GetSSHHostname
	I0407 13:29:27.855733 1206720 main.go:141] libmachine: (old-k8s-version-435730) DBG | domain old-k8s-version-435730 has defined MAC address 52:54:00:e3:da:b2 in network mk-old-k8s-version-435730
	I0407 13:29:27.856349 1206720 main.go:141] libmachine: (old-k8s-version-435730) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e3:da:b2", ip: ""} in network mk-old-k8s-version-435730: {Iface:virbr1 ExpiryTime:2025-04-07 14:29:20 +0000 UTC Type:0 Mac:52:54:00:e3:da:b2 Iaid: IPaddr:192.168.39.211 Prefix:24 Hostname:old-k8s-version-435730 Clientid:01:52:54:00:e3:da:b2}
	I0407 13:29:27.856377 1206720 main.go:141] libmachine: (old-k8s-version-435730) DBG | domain old-k8s-version-435730 has defined IP address 192.168.39.211 and MAC address 52:54:00:e3:da:b2 in network mk-old-k8s-version-435730
	I0407 13:29:27.856604 1206720 main.go:141] libmachine: (old-k8s-version-435730) Calling .GetSSHPort
	I0407 13:29:27.856861 1206720 main.go:141] libmachine: (old-k8s-version-435730) Calling .GetSSHKeyPath
	I0407 13:29:27.857014 1206720 main.go:141] libmachine: (old-k8s-version-435730) Calling .GetSSHKeyPath
	I0407 13:29:27.857156 1206720 main.go:141] libmachine: (old-k8s-version-435730) Calling .GetSSHUsername
	I0407 13:29:27.857345 1206720 main.go:141] libmachine: Using SSH client type: native
	I0407 13:29:27.857581 1206720 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x836360] 0x839060 <nil>  [] 0s} 192.168.39.211 22 <nil> <nil>}
	I0407 13:29:27.857600 1206720 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0407 13:29:28.103926 1206720 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0407 13:29:28.103957 1206720 main.go:141] libmachine: Checking connection to Docker...
	I0407 13:29:28.103970 1206720 main.go:141] libmachine: (old-k8s-version-435730) Calling .GetURL
	I0407 13:29:28.105591 1206720 main.go:141] libmachine: (old-k8s-version-435730) DBG | using libvirt version 6000000
	I0407 13:29:28.109746 1206720 main.go:141] libmachine: (old-k8s-version-435730) DBG | domain old-k8s-version-435730 has defined MAC address 52:54:00:e3:da:b2 in network mk-old-k8s-version-435730
	I0407 13:29:28.110525 1206720 main.go:141] libmachine: (old-k8s-version-435730) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e3:da:b2", ip: ""} in network mk-old-k8s-version-435730: {Iface:virbr1 ExpiryTime:2025-04-07 14:29:20 +0000 UTC Type:0 Mac:52:54:00:e3:da:b2 Iaid: IPaddr:192.168.39.211 Prefix:24 Hostname:old-k8s-version-435730 Clientid:01:52:54:00:e3:da:b2}
	I0407 13:29:28.110573 1206720 main.go:141] libmachine: (old-k8s-version-435730) DBG | domain old-k8s-version-435730 has defined IP address 192.168.39.211 and MAC address 52:54:00:e3:da:b2 in network mk-old-k8s-version-435730
	I0407 13:29:28.110897 1206720 main.go:141] libmachine: Docker is up and running!
	I0407 13:29:28.110921 1206720 main.go:141] libmachine: Reticulating splines...
	I0407 13:29:28.110931 1206720 client.go:171] duration metric: took 24.97532972s to LocalClient.Create
	I0407 13:29:28.110964 1206720 start.go:167] duration metric: took 24.97540235s to libmachine.API.Create "old-k8s-version-435730"
	I0407 13:29:28.110974 1206720 start.go:293] postStartSetup for "old-k8s-version-435730" (driver="kvm2")
	I0407 13:29:28.110986 1206720 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0407 13:29:28.111046 1206720 main.go:141] libmachine: (old-k8s-version-435730) Calling .DriverName
	I0407 13:29:28.111444 1206720 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0407 13:29:28.111485 1206720 main.go:141] libmachine: (old-k8s-version-435730) Calling .GetSSHHostname
	I0407 13:29:28.114671 1206720 main.go:141] libmachine: (old-k8s-version-435730) DBG | domain old-k8s-version-435730 has defined MAC address 52:54:00:e3:da:b2 in network mk-old-k8s-version-435730
	I0407 13:29:28.115092 1206720 main.go:141] libmachine: (old-k8s-version-435730) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e3:da:b2", ip: ""} in network mk-old-k8s-version-435730: {Iface:virbr1 ExpiryTime:2025-04-07 14:29:20 +0000 UTC Type:0 Mac:52:54:00:e3:da:b2 Iaid: IPaddr:192.168.39.211 Prefix:24 Hostname:old-k8s-version-435730 Clientid:01:52:54:00:e3:da:b2}
	I0407 13:29:28.115127 1206720 main.go:141] libmachine: (old-k8s-version-435730) DBG | domain old-k8s-version-435730 has defined IP address 192.168.39.211 and MAC address 52:54:00:e3:da:b2 in network mk-old-k8s-version-435730
	I0407 13:29:28.115299 1206720 main.go:141] libmachine: (old-k8s-version-435730) Calling .GetSSHPort
	I0407 13:29:28.115590 1206720 main.go:141] libmachine: (old-k8s-version-435730) Calling .GetSSHKeyPath
	I0407 13:29:28.115819 1206720 main.go:141] libmachine: (old-k8s-version-435730) Calling .GetSSHUsername
	I0407 13:29:28.116009 1206720 sshutil.go:53] new ssh client: &{IP:192.168.39.211 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20602-1162386/.minikube/machines/old-k8s-version-435730/id_rsa Username:docker}
	I0407 13:29:28.200794 1206720 ssh_runner.go:195] Run: cat /etc/os-release
	I0407 13:29:28.205856 1206720 info.go:137] Remote host: Buildroot 2023.02.9
	I0407 13:29:28.205899 1206720 filesync.go:126] Scanning /home/jenkins/minikube-integration/20602-1162386/.minikube/addons for local assets ...
	I0407 13:29:28.205988 1206720 filesync.go:126] Scanning /home/jenkins/minikube-integration/20602-1162386/.minikube/files for local assets ...
	I0407 13:29:28.206122 1206720 filesync.go:149] local asset: /home/jenkins/minikube-integration/20602-1162386/.minikube/files/etc/ssl/certs/11697162.pem -> 11697162.pem in /etc/ssl/certs
	I0407 13:29:28.206286 1206720 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0407 13:29:28.217630 1206720 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20602-1162386/.minikube/files/etc/ssl/certs/11697162.pem --> /etc/ssl/certs/11697162.pem (1708 bytes)
	I0407 13:29:28.245540 1206720 start.go:296] duration metric: took 134.546886ms for postStartSetup
	I0407 13:29:28.245617 1206720 main.go:141] libmachine: (old-k8s-version-435730) Calling .GetConfigRaw
	I0407 13:29:28.246576 1206720 main.go:141] libmachine: (old-k8s-version-435730) Calling .GetIP
	I0407 13:29:28.250438 1206720 main.go:141] libmachine: (old-k8s-version-435730) DBG | domain old-k8s-version-435730 has defined MAC address 52:54:00:e3:da:b2 in network mk-old-k8s-version-435730
	I0407 13:29:28.250990 1206720 main.go:141] libmachine: (old-k8s-version-435730) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e3:da:b2", ip: ""} in network mk-old-k8s-version-435730: {Iface:virbr1 ExpiryTime:2025-04-07 14:29:20 +0000 UTC Type:0 Mac:52:54:00:e3:da:b2 Iaid: IPaddr:192.168.39.211 Prefix:24 Hostname:old-k8s-version-435730 Clientid:01:52:54:00:e3:da:b2}
	I0407 13:29:28.251005 1206720 main.go:141] libmachine: (old-k8s-version-435730) DBG | domain old-k8s-version-435730 has defined IP address 192.168.39.211 and MAC address 52:54:00:e3:da:b2 in network mk-old-k8s-version-435730
	I0407 13:29:28.251414 1206720 profile.go:143] Saving config to /home/jenkins/minikube-integration/20602-1162386/.minikube/profiles/old-k8s-version-435730/config.json ...
	I0407 13:29:28.251655 1206720 start.go:128] duration metric: took 25.138267404s to createHost
	I0407 13:29:28.251688 1206720 main.go:141] libmachine: (old-k8s-version-435730) Calling .GetSSHHostname
	I0407 13:29:28.255096 1206720 main.go:141] libmachine: (old-k8s-version-435730) DBG | domain old-k8s-version-435730 has defined MAC address 52:54:00:e3:da:b2 in network mk-old-k8s-version-435730
	I0407 13:29:28.255468 1206720 main.go:141] libmachine: (old-k8s-version-435730) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e3:da:b2", ip: ""} in network mk-old-k8s-version-435730: {Iface:virbr1 ExpiryTime:2025-04-07 14:29:20 +0000 UTC Type:0 Mac:52:54:00:e3:da:b2 Iaid: IPaddr:192.168.39.211 Prefix:24 Hostname:old-k8s-version-435730 Clientid:01:52:54:00:e3:da:b2}
	I0407 13:29:28.255506 1206720 main.go:141] libmachine: (old-k8s-version-435730) DBG | domain old-k8s-version-435730 has defined IP address 192.168.39.211 and MAC address 52:54:00:e3:da:b2 in network mk-old-k8s-version-435730
	I0407 13:29:28.255704 1206720 main.go:141] libmachine: (old-k8s-version-435730) Calling .GetSSHPort
	I0407 13:29:28.255996 1206720 main.go:141] libmachine: (old-k8s-version-435730) Calling .GetSSHKeyPath
	I0407 13:29:28.256275 1206720 main.go:141] libmachine: (old-k8s-version-435730) Calling .GetSSHKeyPath
	I0407 13:29:28.256566 1206720 main.go:141] libmachine: (old-k8s-version-435730) Calling .GetSSHUsername
	I0407 13:29:28.256814 1206720 main.go:141] libmachine: Using SSH client type: native
	I0407 13:29:28.257053 1206720 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x836360] 0x839060 <nil>  [] 0s} 192.168.39.211 22 <nil> <nil>}
	I0407 13:29:28.257068 1206720 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0407 13:29:28.367370 1206720 main.go:141] libmachine: SSH cmd err, output: <nil>: 1744032568.335280597
	
	I0407 13:29:28.367398 1206720 fix.go:216] guest clock: 1744032568.335280597
	I0407 13:29:28.367406 1206720 fix.go:229] Guest: 2025-04-07 13:29:28.335280597 +0000 UTC Remote: 2025-04-07 13:29:28.251668506 +0000 UTC m=+25.308284461 (delta=83.612091ms)
	I0407 13:29:28.367452 1206720 fix.go:200] guest clock delta is within tolerance: 83.612091ms
	I0407 13:29:28.367458 1206720 start.go:83] releasing machines lock for "old-k8s-version-435730", held for 25.254184932s
	I0407 13:29:28.367488 1206720 main.go:141] libmachine: (old-k8s-version-435730) Calling .DriverName
	I0407 13:29:28.367848 1206720 main.go:141] libmachine: (old-k8s-version-435730) Calling .GetIP
	I0407 13:29:28.371613 1206720 main.go:141] libmachine: (old-k8s-version-435730) DBG | domain old-k8s-version-435730 has defined MAC address 52:54:00:e3:da:b2 in network mk-old-k8s-version-435730
	I0407 13:29:28.372180 1206720 main.go:141] libmachine: (old-k8s-version-435730) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e3:da:b2", ip: ""} in network mk-old-k8s-version-435730: {Iface:virbr1 ExpiryTime:2025-04-07 14:29:20 +0000 UTC Type:0 Mac:52:54:00:e3:da:b2 Iaid: IPaddr:192.168.39.211 Prefix:24 Hostname:old-k8s-version-435730 Clientid:01:52:54:00:e3:da:b2}
	I0407 13:29:28.372218 1206720 main.go:141] libmachine: (old-k8s-version-435730) DBG | domain old-k8s-version-435730 has defined IP address 192.168.39.211 and MAC address 52:54:00:e3:da:b2 in network mk-old-k8s-version-435730
	I0407 13:29:28.372471 1206720 main.go:141] libmachine: (old-k8s-version-435730) Calling .DriverName
	I0407 13:29:28.373324 1206720 main.go:141] libmachine: (old-k8s-version-435730) Calling .DriverName
	I0407 13:29:28.373604 1206720 main.go:141] libmachine: (old-k8s-version-435730) Calling .DriverName
	I0407 13:29:28.373735 1206720 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0407 13:29:28.373794 1206720 main.go:141] libmachine: (old-k8s-version-435730) Calling .GetSSHHostname
	I0407 13:29:28.373927 1206720 ssh_runner.go:195] Run: cat /version.json
	I0407 13:29:28.373952 1206720 main.go:141] libmachine: (old-k8s-version-435730) Calling .GetSSHHostname
	I0407 13:29:28.378582 1206720 main.go:141] libmachine: (old-k8s-version-435730) DBG | domain old-k8s-version-435730 has defined MAC address 52:54:00:e3:da:b2 in network mk-old-k8s-version-435730
	I0407 13:29:28.379956 1206720 main.go:141] libmachine: (old-k8s-version-435730) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e3:da:b2", ip: ""} in network mk-old-k8s-version-435730: {Iface:virbr1 ExpiryTime:2025-04-07 14:29:20 +0000 UTC Type:0 Mac:52:54:00:e3:da:b2 Iaid: IPaddr:192.168.39.211 Prefix:24 Hostname:old-k8s-version-435730 Clientid:01:52:54:00:e3:da:b2}
	I0407 13:29:28.380038 1206720 main.go:141] libmachine: (old-k8s-version-435730) DBG | domain old-k8s-version-435730 has defined IP address 192.168.39.211 and MAC address 52:54:00:e3:da:b2 in network mk-old-k8s-version-435730
	I0407 13:29:28.380071 1206720 main.go:141] libmachine: (old-k8s-version-435730) DBG | domain old-k8s-version-435730 has defined MAC address 52:54:00:e3:da:b2 in network mk-old-k8s-version-435730
	I0407 13:29:28.380199 1206720 main.go:141] libmachine: (old-k8s-version-435730) Calling .GetSSHPort
	I0407 13:29:28.380523 1206720 main.go:141] libmachine: (old-k8s-version-435730) Calling .GetSSHKeyPath
	I0407 13:29:28.380848 1206720 main.go:141] libmachine: (old-k8s-version-435730) Calling .GetSSHUsername
	I0407 13:29:28.380864 1206720 main.go:141] libmachine: (old-k8s-version-435730) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e3:da:b2", ip: ""} in network mk-old-k8s-version-435730: {Iface:virbr1 ExpiryTime:2025-04-07 14:29:20 +0000 UTC Type:0 Mac:52:54:00:e3:da:b2 Iaid: IPaddr:192.168.39.211 Prefix:24 Hostname:old-k8s-version-435730 Clientid:01:52:54:00:e3:da:b2}
	I0407 13:29:28.380888 1206720 main.go:141] libmachine: (old-k8s-version-435730) DBG | domain old-k8s-version-435730 has defined IP address 192.168.39.211 and MAC address 52:54:00:e3:da:b2 in network mk-old-k8s-version-435730
	I0407 13:29:28.381126 1206720 sshutil.go:53] new ssh client: &{IP:192.168.39.211 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20602-1162386/.minikube/machines/old-k8s-version-435730/id_rsa Username:docker}
	I0407 13:29:28.381312 1206720 main.go:141] libmachine: (old-k8s-version-435730) Calling .GetSSHPort
	I0407 13:29:28.381803 1206720 main.go:141] libmachine: (old-k8s-version-435730) Calling .GetSSHKeyPath
	I0407 13:29:28.382130 1206720 main.go:141] libmachine: (old-k8s-version-435730) Calling .GetSSHUsername
	I0407 13:29:28.382455 1206720 sshutil.go:53] new ssh client: &{IP:192.168.39.211 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20602-1162386/.minikube/machines/old-k8s-version-435730/id_rsa Username:docker}
	I0407 13:29:28.490819 1206720 ssh_runner.go:195] Run: systemctl --version
	I0407 13:29:28.497732 1206720 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0407 13:29:28.678766 1206720 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0407 13:29:28.687049 1206720 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0407 13:29:28.687134 1206720 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0407 13:29:28.706772 1206720 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0407 13:29:28.706802 1206720 start.go:495] detecting cgroup driver to use...
	I0407 13:29:28.706869 1206720 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0407 13:29:28.724653 1206720 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0407 13:29:28.741640 1206720 docker.go:217] disabling cri-docker service (if available) ...
	I0407 13:29:28.741734 1206720 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0407 13:29:28.758064 1206720 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0407 13:29:28.774056 1206720 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0407 13:29:28.910469 1206720 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0407 13:29:29.080892 1206720 docker.go:233] disabling docker service ...
	I0407 13:29:29.080988 1206720 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0407 13:29:29.097231 1206720 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0407 13:29:29.111574 1206720 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0407 13:29:29.247464 1206720 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0407 13:29:29.365355 1206720 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0407 13:29:29.381585 1206720 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0407 13:29:29.402855 1206720 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.2" pause image...
	I0407 13:29:29.402938 1206720 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.2"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0407 13:29:29.415141 1206720 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0407 13:29:29.415242 1206720 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0407 13:29:29.427470 1206720 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0407 13:29:29.438411 1206720 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
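	Taken together, the sed edits above leave the CRI-O drop-in with roughly these settings. This is a sketch assuming the stock section layout of /etc/crio/crio.conf.d/02-crio.conf shipped in the minikube ISO, not the literal file contents:
	
	# /etc/crio/crio.conf.d/02-crio.conf (relevant keys after the edits above)
	[crio.image]
	pause_image = "registry.k8s.io/pause:3.2"
	
	[crio.runtime]
	cgroup_manager = "cgroupfs"
	conmon_cgroup = "pod"
	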
	I0407 13:29:29.451530 1206720 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0407 13:29:29.463469 1206720 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0407 13:29:29.473950 1206720 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0407 13:29:29.474035 1206720 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0407 13:29:29.491064 1206720 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0407 13:29:29.503193 1206720 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0407 13:29:29.621593 1206720 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0407 13:29:29.728587 1206720 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I0407 13:29:29.728757 1206720 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0407 13:29:29.734070 1206720 start.go:563] Will wait 60s for crictl version
	I0407 13:29:29.734142 1206720 ssh_runner.go:195] Run: which crictl
	I0407 13:29:29.738694 1206720 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0407 13:29:29.779804 1206720 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0407 13:29:29.779892 1206720 ssh_runner.go:195] Run: crio --version
	I0407 13:29:29.810551 1206720 ssh_runner.go:195] Run: crio --version
	I0407 13:29:29.847945 1206720 out.go:177] * Preparing Kubernetes v1.20.0 on CRI-O 1.29.1 ...
	I0407 13:29:29.850474 1206720 main.go:141] libmachine: (old-k8s-version-435730) Calling .GetIP
	I0407 13:29:29.854967 1206720 main.go:141] libmachine: (old-k8s-version-435730) DBG | domain old-k8s-version-435730 has defined MAC address 52:54:00:e3:da:b2 in network mk-old-k8s-version-435730
	I0407 13:29:29.855481 1206720 main.go:141] libmachine: (old-k8s-version-435730) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e3:da:b2", ip: ""} in network mk-old-k8s-version-435730: {Iface:virbr1 ExpiryTime:2025-04-07 14:29:20 +0000 UTC Type:0 Mac:52:54:00:e3:da:b2 Iaid: IPaddr:192.168.39.211 Prefix:24 Hostname:old-k8s-version-435730 Clientid:01:52:54:00:e3:da:b2}
	I0407 13:29:29.855512 1206720 main.go:141] libmachine: (old-k8s-version-435730) DBG | domain old-k8s-version-435730 has defined IP address 192.168.39.211 and MAC address 52:54:00:e3:da:b2 in network mk-old-k8s-version-435730
	I0407 13:29:29.855839 1206720 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I0407 13:29:29.861457 1206720 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.39.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0407 13:29:29.877866 1206720 kubeadm.go:883] updating cluster {Name:old-k8s-version-435730 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.35.0-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.46-1743675393-20591@sha256:8de8167e280414c9d16e4c3da59bc85bc7c9cc24228af995863fc7bcabfcf727 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:old-k8s-
version-435730 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.211 Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountU
ID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0407 13:29:29.878036 1206720 preload.go:131] Checking if preload exists for k8s version v1.20.0 and runtime crio
	I0407 13:29:29.878200 1206720 ssh_runner.go:195] Run: sudo crictl images --output json
	I0407 13:29:29.926522 1206720 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.20.0". assuming images are not preloaded.
	I0407 13:29:29.926611 1206720 ssh_runner.go:195] Run: which lz4
	I0407 13:29:29.931012 1206720 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I0407 13:29:29.936235 1206720 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0407 13:29:29.936302 1206720 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20602-1162386/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (473237281 bytes)
	I0407 13:29:31.840518 1206720 crio.go:462] duration metric: took 1.909545488s to copy over tarball
	I0407 13:29:31.840612 1206720 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I0407 13:29:34.726791 1206720 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (2.886144371s)
	I0407 13:29:34.726829 1206720 crio.go:469] duration metric: took 2.886276173s to extract the tarball
	I0407 13:29:34.726841 1206720 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I0407 13:29:34.771380 1206720 ssh_runner.go:195] Run: sudo crictl images --output json
	I0407 13:29:34.827791 1206720 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.20.0". assuming images are not preloaded.
	I0407 13:29:34.827832 1206720 cache_images.go:88] LoadCachedImages start: [registry.k8s.io/kube-apiserver:v1.20.0 registry.k8s.io/kube-controller-manager:v1.20.0 registry.k8s.io/kube-scheduler:v1.20.0 registry.k8s.io/kube-proxy:v1.20.0 registry.k8s.io/pause:3.2 registry.k8s.io/etcd:3.4.13-0 registry.k8s.io/coredns:1.7.0 gcr.io/k8s-minikube/storage-provisioner:v5]
	I0407 13:29:34.827889 1206720 image.go:135] retrieving image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0407 13:29:34.827921 1206720 image.go:135] retrieving image: registry.k8s.io/kube-proxy:v1.20.0
	I0407 13:29:34.827953 1206720 image.go:135] retrieving image: registry.k8s.io/pause:3.2
	I0407 13:29:34.827971 1206720 image.go:135] retrieving image: registry.k8s.io/kube-controller-manager:v1.20.0
	I0407 13:29:34.828024 1206720 image.go:135] retrieving image: registry.k8s.io/coredns:1.7.0
	I0407 13:29:34.828063 1206720 image.go:135] retrieving image: registry.k8s.io/etcd:3.4.13-0
	I0407 13:29:34.828076 1206720 image.go:135] retrieving image: registry.k8s.io/kube-apiserver:v1.20.0
	I0407 13:29:34.828198 1206720 image.go:135] retrieving image: registry.k8s.io/kube-scheduler:v1.20.0
	I0407 13:29:34.829615 1206720 image.go:178] daemon lookup for registry.k8s.io/coredns:1.7.0: Error response from daemon: No such image: registry.k8s.io/coredns:1.7.0
	I0407 13:29:34.829647 1206720 image.go:178] daemon lookup for registry.k8s.io/kube-proxy:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-proxy:v1.20.0
	I0407 13:29:34.829684 1206720 image.go:178] daemon lookup for registry.k8s.io/kube-controller-manager:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-controller-manager:v1.20.0
	I0407 13:29:34.829762 1206720 image.go:178] daemon lookup for gcr.io/k8s-minikube/storage-provisioner:v5: Error response from daemon: No such image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0407 13:29:34.829778 1206720 image.go:178] daemon lookup for registry.k8s.io/kube-scheduler:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-scheduler:v1.20.0
	I0407 13:29:34.829616 1206720 image.go:178] daemon lookup for registry.k8s.io/pause:3.2: Error response from daemon: No such image: registry.k8s.io/pause:3.2
	I0407 13:29:34.829808 1206720 image.go:178] daemon lookup for registry.k8s.io/kube-apiserver:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-apiserver:v1.20.0
	I0407 13:29:34.829930 1206720 image.go:178] daemon lookup for registry.k8s.io/etcd:3.4.13-0: Error response from daemon: No such image: registry.k8s.io/etcd:3.4.13-0
	I0407 13:29:34.978992 1206720 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/etcd:3.4.13-0
	I0407 13:29:34.996027 1206720 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-scheduler:v1.20.0
	I0407 13:29:34.996628 1206720 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-proxy:v1.20.0
	I0407 13:29:35.004669 1206720 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/pause:3.2
	I0407 13:29:35.004693 1206720 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-controller-manager:v1.20.0
	I0407 13:29:35.006599 1206720 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/coredns:1.7.0
	I0407 13:29:35.014363 1206720 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-apiserver:v1.20.0
	I0407 13:29:35.085912 1206720 cache_images.go:116] "registry.k8s.io/etcd:3.4.13-0" needs transfer: "registry.k8s.io/etcd:3.4.13-0" does not exist at hash "0369cf4303ffdb467dc219990960a9baa8512a54b0ad9283eaf55bd6c0adb934" in container runtime
	I0407 13:29:35.085972 1206720 cri.go:218] Removing image: registry.k8s.io/etcd:3.4.13-0
	I0407 13:29:35.086054 1206720 ssh_runner.go:195] Run: which crictl
	I0407 13:29:35.141406 1206720 cache_images.go:116] "registry.k8s.io/kube-scheduler:v1.20.0" needs transfer: "registry.k8s.io/kube-scheduler:v1.20.0" does not exist at hash "3138b6e3d471224fd516f758f3b53309219bcb6824e07686b3cd60d78012c899" in container runtime
	I0407 13:29:35.141478 1206720 cri.go:218] Removing image: registry.k8s.io/kube-scheduler:v1.20.0
	I0407 13:29:35.141526 1206720 ssh_runner.go:195] Run: which crictl
	I0407 13:29:35.153779 1206720 cache_images.go:116] "registry.k8s.io/kube-proxy:v1.20.0" needs transfer: "registry.k8s.io/kube-proxy:v1.20.0" does not exist at hash "10cc881966cfd9287656c2fce1f144625602653d1e8b011487a7a71feb100bdc" in container runtime
	I0407 13:29:35.153842 1206720 cri.go:218] Removing image: registry.k8s.io/kube-proxy:v1.20.0
	I0407 13:29:35.153897 1206720 ssh_runner.go:195] Run: which crictl
	I0407 13:29:35.185600 1206720 cache_images.go:116] "registry.k8s.io/kube-controller-manager:v1.20.0" needs transfer: "registry.k8s.io/kube-controller-manager:v1.20.0" does not exist at hash "b9fa1895dcaa6d3dd241d6d9340e939ca30fc0946464ec9f205a8cbe738a8080" in container runtime
	I0407 13:29:35.185667 1206720 cri.go:218] Removing image: registry.k8s.io/kube-controller-manager:v1.20.0
	I0407 13:29:35.185683 1206720 cache_images.go:116] "registry.k8s.io/pause:3.2" needs transfer: "registry.k8s.io/pause:3.2" does not exist at hash "80d28bedfe5dec59da9ebf8e6260224ac9008ab5c11dbbe16ee3ba3e4439ac2c" in container runtime
	I0407 13:29:35.185746 1206720 cri.go:218] Removing image: registry.k8s.io/pause:3.2
	I0407 13:29:35.185797 1206720 ssh_runner.go:195] Run: which crictl
	I0407 13:29:35.185809 1206720 cache_images.go:116] "registry.k8s.io/kube-apiserver:v1.20.0" needs transfer: "registry.k8s.io/kube-apiserver:v1.20.0" does not exist at hash "ca9843d3b545457f24b012d6d579ba85f132f2406aa171ad84d53caa55e5de99" in container runtime
	I0407 13:29:35.185849 1206720 cri.go:218] Removing image: registry.k8s.io/kube-apiserver:v1.20.0
	I0407 13:29:35.185919 1206720 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.20.0
	I0407 13:29:35.185926 1206720 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.4.13-0
	I0407 13:29:35.185940 1206720 ssh_runner.go:195] Run: which crictl
	I0407 13:29:35.185748 1206720 ssh_runner.go:195] Run: which crictl
	I0407 13:29:35.185745 1206720 cache_images.go:116] "registry.k8s.io/coredns:1.7.0" needs transfer: "registry.k8s.io/coredns:1.7.0" does not exist at hash "bfe3a36ebd2528b454be6aebece806db5b40407b833e2af9617bf39afaff8c16" in container runtime
	I0407 13:29:35.186022 1206720 cri.go:218] Removing image: registry.k8s.io/coredns:1.7.0
	I0407 13:29:35.186028 1206720 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.20.0
	I0407 13:29:35.186056 1206720 ssh_runner.go:195] Run: which crictl
	I0407 13:29:35.258255 1206720 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.20.0
	I0407 13:29:35.258327 1206720 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.20.0
	I0407 13:29:35.258362 1206720 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/pause:3.2
	I0407 13:29:35.258331 1206720 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.4.13-0
	I0407 13:29:35.277507 1206720 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.20.0
	I0407 13:29:35.277529 1206720 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns:1.7.0
	I0407 13:29:35.277644 1206720 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.20.0
	I0407 13:29:35.400929 1206720 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.20.0
	I0407 13:29:35.400977 1206720 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.20.0
	I0407 13:29:35.419669 1206720 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/pause:3.2
	I0407 13:29:35.419746 1206720 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.4.13-0
	I0407 13:29:35.448603 1206720 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.20.0
	I0407 13:29:35.455911 1206720 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns:1.7.0
	I0407 13:29:35.455969 1206720 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.20.0
	I0407 13:29:35.559100 1206720 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.20.0
	I0407 13:29:35.559247 1206720 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.20.0
	I0407 13:29:35.568988 1206720 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/20602-1162386/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.4.13-0
	I0407 13:29:35.569108 1206720 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/pause:3.2
	I0407 13:29:35.614493 1206720 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/20602-1162386/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.20.0
	I0407 13:29:35.622906 1206720 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/20602-1162386/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.20.0
	I0407 13:29:35.623018 1206720 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns:1.7.0
	I0407 13:29:35.680918 1206720 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/20602-1162386/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.20.0
	I0407 13:29:35.680942 1206720 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/20602-1162386/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.20.0
	I0407 13:29:35.681005 1206720 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/20602-1162386/.minikube/cache/images/amd64/registry.k8s.io/pause_3.2
	I0407 13:29:35.703560 1206720 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/20602-1162386/.minikube/cache/images/amd64/registry.k8s.io/coredns_1.7.0
	I0407 13:29:36.504350 1206720 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} gcr.io/k8s-minikube/storage-provisioner:v5
	I0407 13:29:36.647645 1206720 cache_images.go:92] duration metric: took 1.81979062s to LoadCachedImages
	W0407 13:29:36.647769 1206720 out.go:270] X Unable to load cached images: LoadCachedImages: stat /home/jenkins/minikube-integration/20602-1162386/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.4.13-0: no such file or directory
	I0407 13:29:36.647787 1206720 kubeadm.go:934] updating node { 192.168.39.211 8443 v1.20.0 crio true true} ...
	I0407 13:29:36.648067 1206720 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.20.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime=remote --container-runtime-endpoint=unix:///var/run/crio/crio.sock --hostname-override=old-k8s-version-435730 --kubeconfig=/etc/kubernetes/kubelet.conf --network-plugin=cni --node-ip=192.168.39.211
	
	[Install]
	 config:
	{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-435730 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0407 13:29:36.648185 1206720 ssh_runner.go:195] Run: crio config
	I0407 13:29:36.700774 1206720 cni.go:84] Creating CNI manager for ""
	I0407 13:29:36.700801 1206720 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0407 13:29:36.700814 1206720 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0407 13:29:36.700838 1206720 kubeadm.go:189] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.39.211 APIServerPort:8443 KubernetesVersion:v1.20.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:old-k8s-version-435730 NodeName:old-k8s-version-435730 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.39.211"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.39.211 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt St
aticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:false}
	I0407 13:29:36.700990 1206720 kubeadm.go:195] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta2
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.39.211
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: /var/run/crio/crio.sock
	  name: "old-k8s-version-435730"
	  kubeletExtraArgs:
	    node-ip: 192.168.39.211
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta2
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.39.211"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	dns:
	  type: CoreDNS
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.20.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0407 13:29:36.701102 1206720 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.20.0
	I0407 13:29:36.712553 1206720 binaries.go:44] Found k8s binaries, skipping transfer
	I0407 13:29:36.712647 1206720 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0407 13:29:36.725514 1206720 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (430 bytes)
	I0407 13:29:36.745734 1206720 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0407 13:29:36.764465 1206720 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2123 bytes)
	I0407 13:29:36.784025 1206720 ssh_runner.go:195] Run: grep 192.168.39.211	control-plane.minikube.internal$ /etc/hosts
	I0407 13:29:36.789365 1206720 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.39.211	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0407 13:29:36.805248 1206720 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0407 13:29:36.965627 1206720 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0407 13:29:36.985431 1206720 certs.go:68] Setting up /home/jenkins/minikube-integration/20602-1162386/.minikube/profiles/old-k8s-version-435730 for IP: 192.168.39.211
	I0407 13:29:36.985472 1206720 certs.go:194] generating shared ca certs ...
	I0407 13:29:36.985516 1206720 certs.go:226] acquiring lock for ca certs: {Name:mk8e89191fca7f2111bdd08c345368f593b0d5d0 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0407 13:29:36.985760 1206720 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/20602-1162386/.minikube/ca.key
	I0407 13:29:36.985823 1206720 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/20602-1162386/.minikube/proxy-client-ca.key
	I0407 13:29:36.985839 1206720 certs.go:256] generating profile certs ...
	I0407 13:29:36.985926 1206720 certs.go:363] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/20602-1162386/.minikube/profiles/old-k8s-version-435730/client.key
	I0407 13:29:36.985963 1206720 crypto.go:68] Generating cert /home/jenkins/minikube-integration/20602-1162386/.minikube/profiles/old-k8s-version-435730/client.crt with IP's: []
	I0407 13:29:37.248216 1206720 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/20602-1162386/.minikube/profiles/old-k8s-version-435730/client.crt ...
	I0407 13:29:37.248255 1206720 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20602-1162386/.minikube/profiles/old-k8s-version-435730/client.crt: {Name:mk3cdd3eb059e643128e83ffb648bc8e5f5e06eb Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0407 13:29:37.248463 1206720 crypto.go:164] Writing key to /home/jenkins/minikube-integration/20602-1162386/.minikube/profiles/old-k8s-version-435730/client.key ...
	I0407 13:29:37.248483 1206720 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20602-1162386/.minikube/profiles/old-k8s-version-435730/client.key: {Name:mk20f511327f90d4ac4d61b37474f7384f6e2ee7 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0407 13:29:37.248619 1206720 certs.go:363] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/20602-1162386/.minikube/profiles/old-k8s-version-435730/apiserver.key.7d731b07
	I0407 13:29:37.248649 1206720 crypto.go:68] Generating cert /home/jenkins/minikube-integration/20602-1162386/.minikube/profiles/old-k8s-version-435730/apiserver.crt.7d731b07 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.39.211]
	I0407 13:29:37.499314 1206720 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/20602-1162386/.minikube/profiles/old-k8s-version-435730/apiserver.crt.7d731b07 ...
	I0407 13:29:37.499357 1206720 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20602-1162386/.minikube/profiles/old-k8s-version-435730/apiserver.crt.7d731b07: {Name:mkc782a9c7b44e5fc3216a2ec9f08c083ff0a48c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0407 13:29:37.499578 1206720 crypto.go:164] Writing key to /home/jenkins/minikube-integration/20602-1162386/.minikube/profiles/old-k8s-version-435730/apiserver.key.7d731b07 ...
	I0407 13:29:37.499596 1206720 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20602-1162386/.minikube/profiles/old-k8s-version-435730/apiserver.key.7d731b07: {Name:mk8336111df6062c85f0a548e4bb2dcc742b302b Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0407 13:29:37.499705 1206720 certs.go:381] copying /home/jenkins/minikube-integration/20602-1162386/.minikube/profiles/old-k8s-version-435730/apiserver.crt.7d731b07 -> /home/jenkins/minikube-integration/20602-1162386/.minikube/profiles/old-k8s-version-435730/apiserver.crt
	I0407 13:29:37.499804 1206720 certs.go:385] copying /home/jenkins/minikube-integration/20602-1162386/.minikube/profiles/old-k8s-version-435730/apiserver.key.7d731b07 -> /home/jenkins/minikube-integration/20602-1162386/.minikube/profiles/old-k8s-version-435730/apiserver.key
	I0407 13:29:37.499886 1206720 certs.go:363] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/20602-1162386/.minikube/profiles/old-k8s-version-435730/proxy-client.key
	I0407 13:29:37.499909 1206720 crypto.go:68] Generating cert /home/jenkins/minikube-integration/20602-1162386/.minikube/profiles/old-k8s-version-435730/proxy-client.crt with IP's: []
	I0407 13:29:37.779389 1206720 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/20602-1162386/.minikube/profiles/old-k8s-version-435730/proxy-client.crt ...
	I0407 13:29:37.779436 1206720 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20602-1162386/.minikube/profiles/old-k8s-version-435730/proxy-client.crt: {Name:mk212e670bf841c8860d6efaca1fea6d1cacbc3d Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0407 13:29:37.779626 1206720 crypto.go:164] Writing key to /home/jenkins/minikube-integration/20602-1162386/.minikube/profiles/old-k8s-version-435730/proxy-client.key ...
	I0407 13:29:37.779642 1206720 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20602-1162386/.minikube/profiles/old-k8s-version-435730/proxy-client.key: {Name:mkec86cff69d764c5dbad525d3ce7ccae5565641 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0407 13:29:37.779816 1206720 certs.go:484] found cert: /home/jenkins/minikube-integration/20602-1162386/.minikube/certs/1169716.pem (1338 bytes)
	W0407 13:29:37.779857 1206720 certs.go:480] ignoring /home/jenkins/minikube-integration/20602-1162386/.minikube/certs/1169716_empty.pem, impossibly tiny 0 bytes
	I0407 13:29:37.779865 1206720 certs.go:484] found cert: /home/jenkins/minikube-integration/20602-1162386/.minikube/certs/ca-key.pem (1675 bytes)
	I0407 13:29:37.779886 1206720 certs.go:484] found cert: /home/jenkins/minikube-integration/20602-1162386/.minikube/certs/ca.pem (1078 bytes)
	I0407 13:29:37.779906 1206720 certs.go:484] found cert: /home/jenkins/minikube-integration/20602-1162386/.minikube/certs/cert.pem (1123 bytes)
	I0407 13:29:37.779926 1206720 certs.go:484] found cert: /home/jenkins/minikube-integration/20602-1162386/.minikube/certs/key.pem (1675 bytes)
	I0407 13:29:37.779964 1206720 certs.go:484] found cert: /home/jenkins/minikube-integration/20602-1162386/.minikube/files/etc/ssl/certs/11697162.pem (1708 bytes)
	I0407 13:29:37.780529 1206720 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20602-1162386/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0407 13:29:37.818034 1206720 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20602-1162386/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0407 13:29:37.847761 1206720 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20602-1162386/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0407 13:29:37.881896 1206720 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20602-1162386/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0407 13:29:37.919939 1206720 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20602-1162386/.minikube/profiles/old-k8s-version-435730/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1432 bytes)
	I0407 13:29:37.947720 1206720 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20602-1162386/.minikube/profiles/old-k8s-version-435730/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0407 13:29:37.974521 1206720 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20602-1162386/.minikube/profiles/old-k8s-version-435730/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0407 13:29:38.002608 1206720 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20602-1162386/.minikube/profiles/old-k8s-version-435730/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0407 13:29:38.030916 1206720 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20602-1162386/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0407 13:29:38.060591 1206720 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20602-1162386/.minikube/certs/1169716.pem --> /usr/share/ca-certificates/1169716.pem (1338 bytes)
	I0407 13:29:38.087276 1206720 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20602-1162386/.minikube/files/etc/ssl/certs/11697162.pem --> /usr/share/ca-certificates/11697162.pem (1708 bytes)
	I0407 13:29:38.115268 1206720 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0407 13:29:38.134524 1206720 ssh_runner.go:195] Run: openssl version
	I0407 13:29:38.140700 1206720 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/11697162.pem && ln -fs /usr/share/ca-certificates/11697162.pem /etc/ssl/certs/11697162.pem"
	I0407 13:29:38.152939 1206720 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/11697162.pem
	I0407 13:29:38.158568 1206720 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Apr  7 12:22 /usr/share/ca-certificates/11697162.pem
	I0407 13:29:38.158658 1206720 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/11697162.pem
	I0407 13:29:38.165454 1206720 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/11697162.pem /etc/ssl/certs/3ec20f2e.0"
	I0407 13:29:38.178649 1206720 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0407 13:29:38.191377 1206720 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0407 13:29:38.196993 1206720 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Apr  7 12:14 /usr/share/ca-certificates/minikubeCA.pem
	I0407 13:29:38.197089 1206720 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0407 13:29:38.203627 1206720 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0407 13:29:38.215552 1206720 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/1169716.pem && ln -fs /usr/share/ca-certificates/1169716.pem /etc/ssl/certs/1169716.pem"
	I0407 13:29:38.226935 1206720 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/1169716.pem
	I0407 13:29:38.231926 1206720 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Apr  7 12:22 /usr/share/ca-certificates/1169716.pem
	I0407 13:29:38.232012 1206720 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/1169716.pem
	I0407 13:29:38.238163 1206720 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/1169716.pem /etc/ssl/certs/51391683.0"
	I0407 13:29:38.250257 1206720 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0407 13:29:38.255173 1206720 certs.go:399] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I0407 13:29:38.255273 1206720 kubeadm.go:392] StartCluster: {Name:old-k8s-version-435730 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.35.0-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.46-1743675393-20591@sha256:8de8167e280414c9d16e4c3da59bc85bc7c9cc24228af995863fc7bcabfcf727 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:old-k8s-ver
sion-435730 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.211 Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:
docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0407 13:29:38.255382 1206720 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0407 13:29:38.255463 1206720 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0407 13:29:38.297804 1206720 cri.go:89] found id: ""
	I0407 13:29:38.297898 1206720 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0407 13:29:38.309895 1206720 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0407 13:29:38.322361 1206720 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0407 13:29:38.334411 1206720 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0407 13:29:38.334443 1206720 kubeadm.go:157] found existing configuration files:
	
	I0407 13:29:38.334506 1206720 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0407 13:29:38.345811 1206720 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0407 13:29:38.345888 1206720 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0407 13:29:38.356796 1206720 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0407 13:29:38.366507 1206720 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0407 13:29:38.366582 1206720 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0407 13:29:38.377100 1206720 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0407 13:29:38.387477 1206720 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0407 13:29:38.387548 1206720 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0407 13:29:38.398271 1206720 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0407 13:29:38.410130 1206720 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0407 13:29:38.410218 1206720 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0407 13:29:38.422390 1206720 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0407 13:29:38.560776 1206720 kubeadm.go:310] [init] Using Kubernetes version: v1.20.0
	I0407 13:29:38.560868 1206720 kubeadm.go:310] [preflight] Running pre-flight checks
	I0407 13:29:38.715852 1206720 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0407 13:29:38.716058 1206720 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0407 13:29:38.716206 1206720 kubeadm.go:310] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0407 13:29:38.909274 1206720 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0407 13:29:38.911667 1206720 out.go:235]   - Generating certificates and keys ...
	I0407 13:29:38.911808 1206720 kubeadm.go:310] [certs] Using existing ca certificate authority
	I0407 13:29:38.911923 1206720 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I0407 13:29:39.091094 1206720 kubeadm.go:310] [certs] Generating "apiserver-kubelet-client" certificate and key
	I0407 13:29:39.283525 1206720 kubeadm.go:310] [certs] Generating "front-proxy-ca" certificate and key
	I0407 13:29:39.413364 1206720 kubeadm.go:310] [certs] Generating "front-proxy-client" certificate and key
	I0407 13:29:39.694986 1206720 kubeadm.go:310] [certs] Generating "etcd/ca" certificate and key
	I0407 13:29:39.846396 1206720 kubeadm.go:310] [certs] Generating "etcd/server" certificate and key
	I0407 13:29:39.848193 1206720 kubeadm.go:310] [certs] etcd/server serving cert is signed for DNS names [localhost old-k8s-version-435730] and IPs [192.168.39.211 127.0.0.1 ::1]
	I0407 13:29:39.928837 1206720 kubeadm.go:310] [certs] Generating "etcd/peer" certificate and key
	I0407 13:29:39.929085 1206720 kubeadm.go:310] [certs] etcd/peer serving cert is signed for DNS names [localhost old-k8s-version-435730] and IPs [192.168.39.211 127.0.0.1 ::1]
	I0407 13:29:40.096707 1206720 kubeadm.go:310] [certs] Generating "etcd/healthcheck-client" certificate and key
	I0407 13:29:40.226194 1206720 kubeadm.go:310] [certs] Generating "apiserver-etcd-client" certificate and key
	I0407 13:29:40.386369 1206720 kubeadm.go:310] [certs] Generating "sa" key and public key
	I0407 13:29:40.386752 1206720 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0407 13:29:40.476370 1206720 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0407 13:29:40.868146 1206720 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0407 13:29:41.223858 1206720 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0407 13:29:41.325409 1206720 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0407 13:29:41.354600 1206720 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0407 13:29:41.356003 1206720 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0407 13:29:41.356259 1206720 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I0407 13:29:41.546020 1206720 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0407 13:29:41.550048 1206720 out.go:235]   - Booting up control plane ...
	I0407 13:29:41.550294 1206720 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0407 13:29:41.550439 1206720 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0407 13:29:41.550554 1206720 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0407 13:29:41.551567 1206720 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0407 13:29:41.557175 1206720 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I0407 13:30:21.547856 1206720 kubeadm.go:310] [kubelet-check] Initial timeout of 40s passed.
	I0407 13:30:21.548096 1206720 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0407 13:30:21.548390 1206720 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0407 13:30:26.549001 1206720 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0407 13:30:26.549330 1206720 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0407 13:30:36.549118 1206720 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0407 13:30:36.549392 1206720 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0407 13:30:56.548639 1206720 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0407 13:30:56.548905 1206720 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0407 13:31:36.549773 1206720 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0407 13:31:36.550080 1206720 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0407 13:31:36.550109 1206720 kubeadm.go:310] 
	I0407 13:31:36.550289 1206720 kubeadm.go:310] 	Unfortunately, an error has occurred:
	I0407 13:31:36.550360 1206720 kubeadm.go:310] 		timed out waiting for the condition
	I0407 13:31:36.550378 1206720 kubeadm.go:310] 
	I0407 13:31:36.550414 1206720 kubeadm.go:310] 	This error is likely caused by:
	I0407 13:31:36.550456 1206720 kubeadm.go:310] 		- The kubelet is not running
	I0407 13:31:36.550555 1206720 kubeadm.go:310] 		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	I0407 13:31:36.550563 1206720 kubeadm.go:310] 
	I0407 13:31:36.550643 1206720 kubeadm.go:310] 	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
	I0407 13:31:36.550673 1206720 kubeadm.go:310] 		- 'systemctl status kubelet'
	I0407 13:31:36.550701 1206720 kubeadm.go:310] 		- 'journalctl -xeu kubelet'
	I0407 13:31:36.550707 1206720 kubeadm.go:310] 
	I0407 13:31:36.550853 1206720 kubeadm.go:310] 	Additionally, a control plane component may have crashed or exited when started by the container runtime.
	I0407 13:31:36.551010 1206720 kubeadm.go:310] 	To troubleshoot, list all containers using your preferred container runtimes CLI.
	I0407 13:31:36.551025 1206720 kubeadm.go:310] 
	I0407 13:31:36.551231 1206720 kubeadm.go:310] 	Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
	I0407 13:31:36.551374 1206720 kubeadm.go:310] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
	I0407 13:31:36.551483 1206720 kubeadm.go:310] 		Once you have found the failing container, you can inspect its logs with:
	I0407 13:31:36.551585 1206720 kubeadm.go:310] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	I0407 13:31:36.551619 1206720 kubeadm.go:310] 
	I0407 13:31:36.551783 1206720 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0407 13:31:36.551924 1206720 kubeadm.go:310] error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	I0407 13:31:36.552091 1206720 kubeadm.go:310] To see the stack trace of this error execute with --v=5 or higher
	W0407 13:31:36.552224 1206720 out.go:270] ! initialization failed, will try again: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Generating "apiserver-kubelet-client" certificate and key
	[certs] Generating "front-proxy-ca" certificate and key
	[certs] Generating "front-proxy-client" certificate and key
	[certs] Generating "etcd/ca" certificate and key
	[certs] Generating "etcd/server" certificate and key
	[certs] etcd/server serving cert is signed for DNS names [localhost old-k8s-version-435730] and IPs [192.168.39.211 127.0.0.1 ::1]
	[certs] Generating "etcd/peer" certificate and key
	[certs] etcd/peer serving cert is signed for DNS names [localhost old-k8s-version-435730] and IPs [192.168.39.211 127.0.0.1 ::1]
	[certs] Generating "etcd/healthcheck-client" certificate and key
	[certs] Generating "apiserver-etcd-client" certificate and key
	[certs] Generating "sa" key and public key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	I0407 13:31:36.552276 1206720 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
	I0407 13:31:39.647067 1206720 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force": (3.094752316s)
	I0407 13:31:39.647176 1206720 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0407 13:31:39.661376 1206720 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0407 13:31:39.671867 1206720 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0407 13:31:39.671891 1206720 kubeadm.go:157] found existing configuration files:
	
	I0407 13:31:39.671978 1206720 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0407 13:31:39.682357 1206720 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0407 13:31:39.682439 1206720 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0407 13:31:39.692933 1206720 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0407 13:31:39.703192 1206720 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0407 13:31:39.703282 1206720 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0407 13:31:39.716110 1206720 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0407 13:31:39.727995 1206720 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0407 13:31:39.728081 1206720 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0407 13:31:39.738829 1206720 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0407 13:31:39.749268 1206720 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0407 13:31:39.749366 1206720 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0407 13:31:39.760373 1206720 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0407 13:31:40.016833 1206720 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0407 13:33:35.983488 1206720 kubeadm.go:310] error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	I0407 13:33:35.983622 1206720 kubeadm.go:310] To see the stack trace of this error execute with --v=5 or higher
	I0407 13:33:35.985329 1206720 kubeadm.go:310] [init] Using Kubernetes version: v1.20.0
	I0407 13:33:35.985403 1206720 kubeadm.go:310] [preflight] Running pre-flight checks
	I0407 13:33:35.985523 1206720 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0407 13:33:35.985698 1206720 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0407 13:33:35.985849 1206720 kubeadm.go:310] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0407 13:33:35.985943 1206720 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0407 13:33:35.988815 1206720 out.go:235]   - Generating certificates and keys ...
	I0407 13:33:35.988955 1206720 kubeadm.go:310] [certs] Using existing ca certificate authority
	I0407 13:33:35.989172 1206720 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I0407 13:33:35.989272 1206720 kubeadm.go:310] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I0407 13:33:35.989326 1206720 kubeadm.go:310] [certs] Using existing front-proxy-ca certificate authority
	I0407 13:33:35.989381 1206720 kubeadm.go:310] [certs] Using existing front-proxy-client certificate and key on disk
	I0407 13:33:35.989426 1206720 kubeadm.go:310] [certs] Using existing etcd/ca certificate authority
	I0407 13:33:35.989482 1206720 kubeadm.go:310] [certs] Using existing etcd/server certificate and key on disk
	I0407 13:33:35.989531 1206720 kubeadm.go:310] [certs] Using existing etcd/peer certificate and key on disk
	I0407 13:33:35.989592 1206720 kubeadm.go:310] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I0407 13:33:35.989680 1206720 kubeadm.go:310] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I0407 13:33:35.989755 1206720 kubeadm.go:310] [certs] Using the existing "sa" key
	I0407 13:33:35.989828 1206720 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0407 13:33:35.989894 1206720 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0407 13:33:35.989950 1206720 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0407 13:33:35.990034 1206720 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0407 13:33:35.990106 1206720 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0407 13:33:35.990240 1206720 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0407 13:33:35.990347 1206720 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0407 13:33:35.990387 1206720 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I0407 13:33:35.990447 1206720 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0407 13:33:35.992917 1206720 out.go:235]   - Booting up control plane ...
	I0407 13:33:35.993087 1206720 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0407 13:33:35.993218 1206720 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0407 13:33:35.993309 1206720 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0407 13:33:35.993619 1206720 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0407 13:33:35.994137 1206720 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I0407 13:33:35.994228 1206720 kubeadm.go:310] [kubelet-check] Initial timeout of 40s passed.
	I0407 13:33:35.994390 1206720 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0407 13:33:35.994623 1206720 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0407 13:33:35.994721 1206720 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0407 13:33:35.995119 1206720 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0407 13:33:35.995280 1206720 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0407 13:33:35.995555 1206720 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0407 13:33:35.995642 1206720 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0407 13:33:35.996034 1206720 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0407 13:33:35.996146 1206720 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0407 13:33:35.996474 1206720 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0407 13:33:35.996493 1206720 kubeadm.go:310] 
	I0407 13:33:35.996546 1206720 kubeadm.go:310] 	Unfortunately, an error has occurred:
	I0407 13:33:35.996596 1206720 kubeadm.go:310] 		timed out waiting for the condition
	I0407 13:33:35.996606 1206720 kubeadm.go:310] 
	I0407 13:33:35.996652 1206720 kubeadm.go:310] 	This error is likely caused by:
	I0407 13:33:35.996725 1206720 kubeadm.go:310] 		- The kubelet is not running
	I0407 13:33:35.996854 1206720 kubeadm.go:310] 		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	I0407 13:33:35.996868 1206720 kubeadm.go:310] 
	I0407 13:33:35.997004 1206720 kubeadm.go:310] 	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
	I0407 13:33:35.997050 1206720 kubeadm.go:310] 		- 'systemctl status kubelet'
	I0407 13:33:35.997092 1206720 kubeadm.go:310] 		- 'journalctl -xeu kubelet'
	I0407 13:33:35.997101 1206720 kubeadm.go:310] 
	I0407 13:33:35.997231 1206720 kubeadm.go:310] 	Additionally, a control plane component may have crashed or exited when started by the container runtime.
	I0407 13:33:35.997363 1206720 kubeadm.go:310] 	To troubleshoot, list all containers using your preferred container runtimes CLI.
	I0407 13:33:35.997390 1206720 kubeadm.go:310] 
	I0407 13:33:35.997572 1206720 kubeadm.go:310] 	Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
	I0407 13:33:35.997696 1206720 kubeadm.go:310] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
	I0407 13:33:35.997842 1206720 kubeadm.go:310] 		Once you have found the failing container, you can inspect its logs with:
	I0407 13:33:35.997967 1206720 kubeadm.go:310] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	I0407 13:33:35.998064 1206720 kubeadm.go:310] 
	I0407 13:33:35.998078 1206720 kubeadm.go:394] duration metric: took 3m57.742805034s to StartCluster
	I0407 13:33:35.998139 1206720 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0407 13:33:35.998243 1206720 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0407 13:33:36.050376 1206720 cri.go:89] found id: ""
	I0407 13:33:36.050447 1206720 logs.go:282] 0 containers: []
	W0407 13:33:36.050463 1206720 logs.go:284] No container was found matching "kube-apiserver"
	I0407 13:33:36.050472 1206720 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0407 13:33:36.050551 1206720 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0407 13:33:36.101420 1206720 cri.go:89] found id: ""
	I0407 13:33:36.101455 1206720 logs.go:282] 0 containers: []
	W0407 13:33:36.101469 1206720 logs.go:284] No container was found matching "etcd"
	I0407 13:33:36.101477 1206720 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0407 13:33:36.101549 1206720 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0407 13:33:36.157805 1206720 cri.go:89] found id: ""
	I0407 13:33:36.157852 1206720 logs.go:282] 0 containers: []
	W0407 13:33:36.157866 1206720 logs.go:284] No container was found matching "coredns"
	I0407 13:33:36.157876 1206720 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0407 13:33:36.158005 1206720 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0407 13:33:36.205052 1206720 cri.go:89] found id: ""
	I0407 13:33:36.205094 1206720 logs.go:282] 0 containers: []
	W0407 13:33:36.205115 1206720 logs.go:284] No container was found matching "kube-scheduler"
	I0407 13:33:36.205124 1206720 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0407 13:33:36.205200 1206720 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0407 13:33:36.247127 1206720 cri.go:89] found id: ""
	I0407 13:33:36.247161 1206720 logs.go:282] 0 containers: []
	W0407 13:33:36.247170 1206720 logs.go:284] No container was found matching "kube-proxy"
	I0407 13:33:36.247177 1206720 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0407 13:33:36.247241 1206720 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0407 13:33:36.287879 1206720 cri.go:89] found id: ""
	I0407 13:33:36.287923 1206720 logs.go:282] 0 containers: []
	W0407 13:33:36.287935 1206720 logs.go:284] No container was found matching "kube-controller-manager"
	I0407 13:33:36.287943 1206720 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0407 13:33:36.288037 1206720 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0407 13:33:36.330634 1206720 cri.go:89] found id: ""
	I0407 13:33:36.330673 1206720 logs.go:282] 0 containers: []
	W0407 13:33:36.330686 1206720 logs.go:284] No container was found matching "kindnet"
	I0407 13:33:36.330700 1206720 logs.go:123] Gathering logs for CRI-O ...
	I0407 13:33:36.330717 1206720 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0407 13:33:36.459639 1206720 logs.go:123] Gathering logs for container status ...
	I0407 13:33:36.459689 1206720 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0407 13:33:36.520494 1206720 logs.go:123] Gathering logs for kubelet ...
	I0407 13:33:36.520542 1206720 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0407 13:33:36.581038 1206720 logs.go:123] Gathering logs for dmesg ...
	I0407 13:33:36.581103 1206720 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0407 13:33:36.603634 1206720 logs.go:123] Gathering logs for describe nodes ...
	I0407 13:33:36.603680 1206720 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0407 13:33:36.765987 1206720 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	W0407 13:33:36.766018 1206720 out.go:418] Error starting cluster: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	W0407 13:33:36.766097 1206720 out.go:270] * 
	* 
	W0407 13:33:36.766173 1206720 out.go:270] X Error starting cluster: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	X Error starting cluster: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	W0407 13:33:36.766215 1206720 out.go:270] * 
	* 
	W0407 13:33:36.767134 1206720 out.go:293] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0407 13:33:36.770832 1206720 out.go:201] 
	W0407 13:33:36.772620 1206720 out.go:270] X Exiting due to K8S_KUBELET_NOT_RUNNING: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	X Exiting due to K8S_KUBELET_NOT_RUNNING: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	W0407 13:33:36.772703 1206720 out.go:270] * Suggestion: Check output of 'journalctl -xeu kubelet', try passing --extra-config=kubelet.cgroup-driver=systemd to minikube start
	* Suggestion: Check output of 'journalctl -xeu kubelet', try passing --extra-config=kubelet.cgroup-driver=systemd to minikube start
	W0407 13:33:36.772730 1206720 out.go:270] * Related issue: https://github.com/kubernetes/minikube/issues/4172
	* Related issue: https://github.com/kubernetes/minikube/issues/4172
	I0407 13:33:36.774760 1206720 out.go:201] 

                                                
                                                
** /stderr **
start_stop_delete_test.go:186: failed starting minikube -first start-. args "out/minikube-linux-amd64 start -p old-k8s-version-435730 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.20.0": exit status 109
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-435730 -n old-k8s-version-435730
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-435730 -n old-k8s-version-435730: exit status 6 (260.012309ms)

                                                
                                                
-- stdout --
	Running
	WARNING: Your kubectl is pointing to stale minikube-vm.
	To fix the kubectl context, run `minikube update-context`

                                                
                                                
-- /stdout --
** stderr ** 
	E0407 13:33:37.094397 1212774 status.go:458] kubeconfig endpoint: get endpoint: "old-k8s-version-435730" does not appear in /home/jenkins/minikube-integration/20602-1162386/kubeconfig

                                                
                                                
** /stderr **
helpers_test.go:239: status error: exit status 6 (may be ok)
helpers_test.go:241: "old-k8s-version-435730" host is not running, skipping log retrieval (state="Running\nWARNING: Your kubectl is pointing to stale minikube-vm.\nTo fix the kubectl context, run `minikube update-context`")
--- FAIL: TestStartStop/group/old-k8s-version/serial/FirstStart (274.18s)
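Note: the FirstStart failure above is the kubelet-not-running pattern that kubeadm's own output says to triage with journalctl and crictl, and minikube's suggestion line points at the kubelet cgroup driver. A minimal triage/retry sketch based only on those suggestions (run the first two commands on the node, e.g. via `minikube ssh -p old-k8s-version-435730`; whether the cgroup driver is actually the root cause here is an assumption, not something this report confirms):

	# inspect the kubelet and any control-plane containers on the node
	journalctl -xeu kubelet
	sudo crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause
	# retry the same profile with the systemd cgroup driver, as suggested in the log above
	out/minikube-linux-amd64 start -p old-k8s-version-435730 --driver=kvm2 --container-runtime=crio \
	  --kubernetes-version=v1.20.0 --extra-config=kubelet.cgroup-driver=systemd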

                                                
                                    
TestStartStop/group/old-k8s-version/serial/DeployApp (0.6s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/DeployApp
start_stop_delete_test.go:194: (dbg) Run:  kubectl --context old-k8s-version-435730 create -f testdata/busybox.yaml
start_stop_delete_test.go:194: (dbg) Non-zero exit: kubectl --context old-k8s-version-435730 create -f testdata/busybox.yaml: exit status 1 (55.241558ms)

                                                
                                                
** stderr ** 
	error: context "old-k8s-version-435730" does not exist

                                                
                                                
** /stderr **
start_stop_delete_test.go:194: kubectl --context old-k8s-version-435730 create -f testdata/busybox.yaml failed: exit status 1
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-435730 -n old-k8s-version-435730
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-435730 -n old-k8s-version-435730: exit status 6 (275.532556ms)

                                                
                                                
-- stdout --
	Running
	WARNING: Your kubectl is pointing to stale minikube-vm.
	To fix the kubectl context, run `minikube update-context`

                                                
                                                
-- /stdout --
** stderr ** 
	E0407 13:33:37.420705 1212814 status.go:458] kubeconfig endpoint: get endpoint: "old-k8s-version-435730" does not appear in /home/jenkins/minikube-integration/20602-1162386/kubeconfig

                                                
                                                
** /stderr **
helpers_test.go:239: status error: exit status 6 (may be ok)
helpers_test.go:241: "old-k8s-version-435730" host is not running, skipping log retrieval (state="Running\nWARNING: Your kubectl is pointing to stale minikube-vm.\nTo fix the kubectl context, run `minikube update-context`")
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-435730 -n old-k8s-version-435730
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-435730 -n old-k8s-version-435730: exit status 6 (273.441476ms)

                                                
                                                
-- stdout --
	Running
	WARNING: Your kubectl is pointing to stale minikube-vm.
	To fix the kubectl context, run `minikube update-context`

                                                
                                                
-- /stdout --
** stderr ** 
	E0407 13:33:37.698599 1212844 status.go:458] kubeconfig endpoint: get endpoint: "old-k8s-version-435730" does not appear in /home/jenkins/minikube-integration/20602-1162386/kubeconfig

                                                
                                                
** /stderr **
helpers_test.go:239: status error: exit status 6 (may be ok)
helpers_test.go:241: "old-k8s-version-435730" host is not running, skipping log retrieval (state="Running\nWARNING: Your kubectl is pointing to stale minikube-vm.\nTo fix the kubectl context, run `minikube update-context`")
--- FAIL: TestStartStop/group/old-k8s-version/serial/DeployApp (0.60s)
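Note: this DeployApp failure is a follow-on of FirstStart: because the cluster never initialized, no "old-k8s-version-435730" entry was written to the kubeconfig, so every kubectl --context call fails. The status output's own hint is `minikube update-context`, which only helps once the cluster is actually up; a sketch of that check, using the profile and context names from the log:

	# rewrite the kubeconfig entry for this profile, then confirm the context exists
	out/minikube-linux-amd64 update-context -p old-k8s-version-435730
	kubectl config get-contexts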

                                                
                                    
TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive (78.27s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive
start_stop_delete_test.go:203: (dbg) Run:  out/minikube-linux-amd64 addons enable metrics-server -p old-k8s-version-435730 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:203: (dbg) Non-zero exit: out/minikube-linux-amd64 addons enable metrics-server -p old-k8s-version-435730 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain: exit status 10 (1m17.95979183s)

                                                
                                                
-- stdout --
	* metrics-server is an addon maintained by Kubernetes. For any concerns contact minikube on GitHub.
	You can view the list of minikube maintainers at: https://github.com/kubernetes/minikube/blob/master/OWNERS
	  - Using image fake.domain/registry.k8s.io/echoserver:1.4
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to MK_ADDON_ENABLE: enable failed: run callbacks: running callbacks: [sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	]
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_2bafae6fa40fec163538f94366e390b0317a8b15_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
start_stop_delete_test.go:205: failed to enable an addon post-stop. args "out/minikube-linux-amd64 addons enable metrics-server -p old-k8s-version-435730 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain": exit status 10
start_stop_delete_test.go:213: (dbg) Run:  kubectl --context old-k8s-version-435730 describe deploy/metrics-server -n kube-system
start_stop_delete_test.go:213: (dbg) Non-zero exit: kubectl --context old-k8s-version-435730 describe deploy/metrics-server -n kube-system: exit status 1 (51.983842ms)

                                                
                                                
** stderr ** 
	error: context "old-k8s-version-435730" does not exist

                                                
                                                
** /stderr **
start_stop_delete_test.go:215: failed to get info on auto-pause deployments. args "kubectl --context old-k8s-version-435730 describe deploy/metrics-server -n kube-system": exit status 1
start_stop_delete_test.go:219: addon did not load correct image. Expected to contain " fake.domain/registry.k8s.io/echoserver:1.4". Addon deployment info: 
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-435730 -n old-k8s-version-435730
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-435730 -n old-k8s-version-435730: exit status 6 (262.290797ms)

                                                
                                                
-- stdout --
	Running
	WARNING: Your kubectl is pointing to stale minikube-vm.
	To fix the kubectl context, run `minikube update-context`

                                                
                                                
-- /stdout --
** stderr ** 
	E0407 13:34:55.977584 1213774 status.go:458] kubeconfig endpoint: get endpoint: "old-k8s-version-435730" does not appear in /home/jenkins/minikube-integration/20602-1162386/kubeconfig

                                                
                                                
** /stderr **
helpers_test.go:239: status error: exit status 6 (may be ok)
helpers_test.go:241: "old-k8s-version-435730" host is not running, skipping log retrieval (state="Running\nWARNING: Your kubectl is pointing to stale minikube-vm.\nTo fix the kubectl context, run `minikube update-context`")
--- FAIL: TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive (78.27s)
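Note: EnableAddonWhileActive fails for the same underlying reason - the apiserver at localhost:8443 never came up, so the addon manifests cannot be applied. A quick pre-check sketch before enabling addons (it assumes the profile's kubeconfig context exists, which it does not in this run):

	# confirm the control plane is reachable before enabling addons
	out/minikube-linux-amd64 status -p old-k8s-version-435730
	kubectl --context old-k8s-version-435730 get --raw /healthz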

                                                
                                    
TestStartStop/group/old-k8s-version/serial/SecondStart (512.36s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/SecondStart
start_stop_delete_test.go:254: (dbg) Run:  out/minikube-linux-amd64 start -p old-k8s-version-435730 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.20.0
start_stop_delete_test.go:254: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p old-k8s-version-435730 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.20.0: exit status 109 (8m30.620744035s)

                                                
                                                
-- stdout --
	* [old-k8s-version-435730] minikube v1.35.0 on Ubuntu 20.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=20602
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/20602-1162386/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/20602-1162386/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Kubernetes 1.32.2 is now available. If you would like to upgrade, specify: --kubernetes-version=v1.32.2
	* Using the kvm2 driver based on existing profile
	* Starting "old-k8s-version-435730" primary control-plane node in "old-k8s-version-435730" cluster
	* Restarting existing kvm2 VM for "old-k8s-version-435730" ...
	* Preparing Kubernetes v1.20.0 on CRI-O 1.29.1 ...
	  - Generating certificates and keys ...
	  - Booting up control plane ...
	  - Generating certificates and keys ...
	  - Booting up control plane ...
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0407 13:35:02.584782 1213906 out.go:345] Setting OutFile to fd 1 ...
	I0407 13:35:02.584932 1213906 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0407 13:35:02.584945 1213906 out.go:358] Setting ErrFile to fd 2...
	I0407 13:35:02.584951 1213906 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0407 13:35:02.585264 1213906 root.go:338] Updating PATH: /home/jenkins/minikube-integration/20602-1162386/.minikube/bin
	I0407 13:35:02.586205 1213906 out.go:352] Setting JSON to false
	I0407 13:35:02.587709 1213906 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-2","uptime":19047,"bootTime":1744013856,"procs":204,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1078-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0407 13:35:02.587821 1213906 start.go:139] virtualization: kvm guest
	I0407 13:35:02.590566 1213906 out.go:177] * [old-k8s-version-435730] minikube v1.35.0 on Ubuntu 20.04 (kvm/amd64)
	I0407 13:35:02.592838 1213906 out.go:177]   - MINIKUBE_LOCATION=20602
	I0407 13:35:02.592838 1213906 notify.go:220] Checking for updates...
	I0407 13:35:02.596470 1213906 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0407 13:35:02.599833 1213906 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/20602-1162386/kubeconfig
	I0407 13:35:02.601692 1213906 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/20602-1162386/.minikube
	I0407 13:35:02.603566 1213906 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0407 13:35:02.605178 1213906 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0407 13:35:02.607695 1213906 config.go:182] Loaded profile config "old-k8s-version-435730": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.20.0
	I0407 13:35:02.608288 1213906 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0407 13:35:02.608378 1213906 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0407 13:35:02.628192 1213906 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39235
	I0407 13:35:02.628756 1213906 main.go:141] libmachine: () Calling .GetVersion
	I0407 13:35:02.629456 1213906 main.go:141] libmachine: Using API Version  1
	I0407 13:35:02.629488 1213906 main.go:141] libmachine: () Calling .SetConfigRaw
	I0407 13:35:02.629956 1213906 main.go:141] libmachine: () Calling .GetMachineName
	I0407 13:35:02.630203 1213906 main.go:141] libmachine: (old-k8s-version-435730) Calling .DriverName
	I0407 13:35:02.632794 1213906 out.go:177] * Kubernetes 1.32.2 is now available. If you would like to upgrade, specify: --kubernetes-version=v1.32.2
	I0407 13:35:02.634805 1213906 driver.go:394] Setting default libvirt URI to qemu:///system
	I0407 13:35:02.635283 1213906 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0407 13:35:02.635447 1213906 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0407 13:35:02.655837 1213906 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44895
	I0407 13:35:02.656679 1213906 main.go:141] libmachine: () Calling .GetVersion
	I0407 13:35:02.657560 1213906 main.go:141] libmachine: Using API Version  1
	I0407 13:35:02.657597 1213906 main.go:141] libmachine: () Calling .SetConfigRaw
	I0407 13:35:02.658245 1213906 main.go:141] libmachine: () Calling .GetMachineName
	I0407 13:35:02.658503 1213906 main.go:141] libmachine: (old-k8s-version-435730) Calling .DriverName
	I0407 13:35:02.704321 1213906 out.go:177] * Using the kvm2 driver based on existing profile
	I0407 13:35:02.706665 1213906 start.go:297] selected driver: kvm2
	I0407 13:35:02.706700 1213906 start.go:901] validating driver "kvm2" against &{Name:old-k8s-version-435730 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.35.0-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.46-1743675393-20591@sha256:8de8167e280414c9d16e4c3da59bc85bc7c9cc24228af995863fc7bcabfcf727 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-435730 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.211 Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0407 13:35:02.706894 1213906 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0407 13:35:02.707904 1213906 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0407 13:35:02.708120 1213906 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/20602-1162386/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0407 13:35:02.727765 1213906 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.35.0
	I0407 13:35:02.728563 1213906 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0407 13:35:02.728636 1213906 cni.go:84] Creating CNI manager for ""
	I0407 13:35:02.728692 1213906 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0407 13:35:02.728786 1213906 start.go:340] cluster config:
	{Name:old-k8s-version-435730 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.35.0-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.46-1743675393-20591@sha256:8de8167e280414c9d16e4c3da59bc85bc7c9cc24228af995863fc7bcabfcf727 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-435730 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.211 Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0407 13:35:02.728970 1213906 iso.go:125] acquiring lock: {Name:mk51e1827709f7a3810dbd898083f8185ece65eb Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0407 13:35:02.731825 1213906 out.go:177] * Starting "old-k8s-version-435730" primary control-plane node in "old-k8s-version-435730" cluster
	I0407 13:35:02.733847 1213906 preload.go:131] Checking if preload exists for k8s version v1.20.0 and runtime crio
	I0407 13:35:02.733918 1213906 preload.go:146] Found local preload: /home/jenkins/minikube-integration/20602-1162386/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4
	I0407 13:35:02.733932 1213906 cache.go:56] Caching tarball of preloaded images
	I0407 13:35:02.734056 1213906 preload.go:172] Found /home/jenkins/minikube-integration/20602-1162386/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I0407 13:35:02.734075 1213906 cache.go:59] Finished verifying existence of preloaded tar for v1.20.0 on crio
	I0407 13:35:02.734225 1213906 profile.go:143] Saving config to /home/jenkins/minikube-integration/20602-1162386/.minikube/profiles/old-k8s-version-435730/config.json ...
	I0407 13:35:02.734529 1213906 start.go:360] acquireMachinesLock for old-k8s-version-435730: {Name:mk51d4c744fa92d56cf6ac11b1e792c85ef6709a Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0407 13:35:02.734590 1213906 start.go:364] duration metric: took 34.184µs to acquireMachinesLock for "old-k8s-version-435730"
	I0407 13:35:02.734610 1213906 start.go:96] Skipping create...Using existing machine configuration
	I0407 13:35:02.734618 1213906 fix.go:54] fixHost starting: 
	I0407 13:35:02.735017 1213906 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0407 13:35:02.735071 1213906 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0407 13:35:02.752558 1213906 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45505
	I0407 13:35:02.753187 1213906 main.go:141] libmachine: () Calling .GetVersion
	I0407 13:35:02.753801 1213906 main.go:141] libmachine: Using API Version  1
	I0407 13:35:02.753826 1213906 main.go:141] libmachine: () Calling .SetConfigRaw
	I0407 13:35:02.754273 1213906 main.go:141] libmachine: () Calling .GetMachineName
	I0407 13:35:02.754526 1213906 main.go:141] libmachine: (old-k8s-version-435730) Calling .DriverName
	I0407 13:35:02.754744 1213906 main.go:141] libmachine: (old-k8s-version-435730) Calling .GetState
	I0407 13:35:02.756928 1213906 fix.go:112] recreateIfNeeded on old-k8s-version-435730: state=Stopped err=<nil>
	I0407 13:35:02.756962 1213906 main.go:141] libmachine: (old-k8s-version-435730) Calling .DriverName
	W0407 13:35:02.757461 1213906 fix.go:138] unexpected machine state, will restart: <nil>
	I0407 13:35:02.759787 1213906 out.go:177] * Restarting existing kvm2 VM for "old-k8s-version-435730" ...
	I0407 13:35:02.761748 1213906 main.go:141] libmachine: (old-k8s-version-435730) Calling .Start
	I0407 13:35:02.762075 1213906 main.go:141] libmachine: (old-k8s-version-435730) starting domain...
	I0407 13:35:02.762101 1213906 main.go:141] libmachine: (old-k8s-version-435730) ensuring networks are active...
	I0407 13:35:02.763555 1213906 main.go:141] libmachine: (old-k8s-version-435730) Ensuring network default is active
	I0407 13:35:02.764122 1213906 main.go:141] libmachine: (old-k8s-version-435730) Ensuring network mk-old-k8s-version-435730 is active
	I0407 13:35:02.764655 1213906 main.go:141] libmachine: (old-k8s-version-435730) getting domain XML...
	I0407 13:35:02.765684 1213906 main.go:141] libmachine: (old-k8s-version-435730) creating domain...
	I0407 13:35:04.353178 1213906 main.go:141] libmachine: (old-k8s-version-435730) waiting for IP...
	I0407 13:35:04.354358 1213906 main.go:141] libmachine: (old-k8s-version-435730) DBG | domain old-k8s-version-435730 has defined MAC address 52:54:00:e3:da:b2 in network mk-old-k8s-version-435730
	I0407 13:35:04.354990 1213906 main.go:141] libmachine: (old-k8s-version-435730) DBG | unable to find current IP address of domain old-k8s-version-435730 in network mk-old-k8s-version-435730
	I0407 13:35:04.355127 1213906 main.go:141] libmachine: (old-k8s-version-435730) DBG | I0407 13:35:04.354984 1213958 retry.go:31] will retry after 218.432262ms: waiting for domain to come up
	I0407 13:35:04.575755 1213906 main.go:141] libmachine: (old-k8s-version-435730) DBG | domain old-k8s-version-435730 has defined MAC address 52:54:00:e3:da:b2 in network mk-old-k8s-version-435730
	I0407 13:35:04.576452 1213906 main.go:141] libmachine: (old-k8s-version-435730) DBG | unable to find current IP address of domain old-k8s-version-435730 in network mk-old-k8s-version-435730
	I0407 13:35:04.576484 1213906 main.go:141] libmachine: (old-k8s-version-435730) DBG | I0407 13:35:04.576405 1213958 retry.go:31] will retry after 353.43135ms: waiting for domain to come up
	I0407 13:35:04.932380 1213906 main.go:141] libmachine: (old-k8s-version-435730) DBG | domain old-k8s-version-435730 has defined MAC address 52:54:00:e3:da:b2 in network mk-old-k8s-version-435730
	I0407 13:35:04.933439 1213906 main.go:141] libmachine: (old-k8s-version-435730) DBG | unable to find current IP address of domain old-k8s-version-435730 in network mk-old-k8s-version-435730
	I0407 13:35:04.933501 1213906 main.go:141] libmachine: (old-k8s-version-435730) DBG | I0407 13:35:04.933396 1213958 retry.go:31] will retry after 380.214979ms: waiting for domain to come up
	I0407 13:35:05.315172 1213906 main.go:141] libmachine: (old-k8s-version-435730) DBG | domain old-k8s-version-435730 has defined MAC address 52:54:00:e3:da:b2 in network mk-old-k8s-version-435730
	I0407 13:35:05.316095 1213906 main.go:141] libmachine: (old-k8s-version-435730) DBG | unable to find current IP address of domain old-k8s-version-435730 in network mk-old-k8s-version-435730
	I0407 13:35:05.316127 1213906 main.go:141] libmachine: (old-k8s-version-435730) DBG | I0407 13:35:05.316024 1213958 retry.go:31] will retry after 456.871873ms: waiting for domain to come up
	I0407 13:35:05.775076 1213906 main.go:141] libmachine: (old-k8s-version-435730) DBG | domain old-k8s-version-435730 has defined MAC address 52:54:00:e3:da:b2 in network mk-old-k8s-version-435730
	I0407 13:35:05.775710 1213906 main.go:141] libmachine: (old-k8s-version-435730) DBG | unable to find current IP address of domain old-k8s-version-435730 in network mk-old-k8s-version-435730
	I0407 13:35:05.775744 1213906 main.go:141] libmachine: (old-k8s-version-435730) DBG | I0407 13:35:05.775662 1213958 retry.go:31] will retry after 747.304391ms: waiting for domain to come up
	I0407 13:35:06.525261 1213906 main.go:141] libmachine: (old-k8s-version-435730) DBG | domain old-k8s-version-435730 has defined MAC address 52:54:00:e3:da:b2 in network mk-old-k8s-version-435730
	I0407 13:35:06.525870 1213906 main.go:141] libmachine: (old-k8s-version-435730) DBG | unable to find current IP address of domain old-k8s-version-435730 in network mk-old-k8s-version-435730
	I0407 13:35:06.525907 1213906 main.go:141] libmachine: (old-k8s-version-435730) DBG | I0407 13:35:06.525830 1213958 retry.go:31] will retry after 676.921667ms: waiting for domain to come up
	I0407 13:35:07.205332 1213906 main.go:141] libmachine: (old-k8s-version-435730) DBG | domain old-k8s-version-435730 has defined MAC address 52:54:00:e3:da:b2 in network mk-old-k8s-version-435730
	I0407 13:35:07.206866 1213906 main.go:141] libmachine: (old-k8s-version-435730) DBG | unable to find current IP address of domain old-k8s-version-435730 in network mk-old-k8s-version-435730
	I0407 13:35:07.206906 1213906 main.go:141] libmachine: (old-k8s-version-435730) DBG | I0407 13:35:07.206790 1213958 retry.go:31] will retry after 863.984802ms: waiting for domain to come up
	I0407 13:35:08.073045 1213906 main.go:141] libmachine: (old-k8s-version-435730) DBG | domain old-k8s-version-435730 has defined MAC address 52:54:00:e3:da:b2 in network mk-old-k8s-version-435730
	I0407 13:35:08.074037 1213906 main.go:141] libmachine: (old-k8s-version-435730) DBG | unable to find current IP address of domain old-k8s-version-435730 in network mk-old-k8s-version-435730
	I0407 13:35:08.074116 1213906 main.go:141] libmachine: (old-k8s-version-435730) DBG | I0407 13:35:08.073996 1213958 retry.go:31] will retry after 1.147049325s: waiting for domain to come up
	I0407 13:35:09.222697 1213906 main.go:141] libmachine: (old-k8s-version-435730) DBG | domain old-k8s-version-435730 has defined MAC address 52:54:00:e3:da:b2 in network mk-old-k8s-version-435730
	I0407 13:35:09.223375 1213906 main.go:141] libmachine: (old-k8s-version-435730) DBG | unable to find current IP address of domain old-k8s-version-435730 in network mk-old-k8s-version-435730
	I0407 13:35:09.223406 1213906 main.go:141] libmachine: (old-k8s-version-435730) DBG | I0407 13:35:09.223354 1213958 retry.go:31] will retry after 1.254481277s: waiting for domain to come up
	I0407 13:35:10.480181 1213906 main.go:141] libmachine: (old-k8s-version-435730) DBG | domain old-k8s-version-435730 has defined MAC address 52:54:00:e3:da:b2 in network mk-old-k8s-version-435730
	I0407 13:35:10.480721 1213906 main.go:141] libmachine: (old-k8s-version-435730) DBG | unable to find current IP address of domain old-k8s-version-435730 in network mk-old-k8s-version-435730
	I0407 13:35:10.480749 1213906 main.go:141] libmachine: (old-k8s-version-435730) DBG | I0407 13:35:10.480660 1213958 retry.go:31] will retry after 2.230895895s: waiting for domain to come up
	I0407 13:35:12.714350 1213906 main.go:141] libmachine: (old-k8s-version-435730) DBG | domain old-k8s-version-435730 has defined MAC address 52:54:00:e3:da:b2 in network mk-old-k8s-version-435730
	I0407 13:35:12.715245 1213906 main.go:141] libmachine: (old-k8s-version-435730) DBG | unable to find current IP address of domain old-k8s-version-435730 in network mk-old-k8s-version-435730
	I0407 13:35:12.715287 1213906 main.go:141] libmachine: (old-k8s-version-435730) DBG | I0407 13:35:12.715126 1213958 retry.go:31] will retry after 2.700085459s: waiting for domain to come up
	I0407 13:35:15.418904 1213906 main.go:141] libmachine: (old-k8s-version-435730) DBG | domain old-k8s-version-435730 has defined MAC address 52:54:00:e3:da:b2 in network mk-old-k8s-version-435730
	I0407 13:35:15.419758 1213906 main.go:141] libmachine: (old-k8s-version-435730) DBG | unable to find current IP address of domain old-k8s-version-435730 in network mk-old-k8s-version-435730
	I0407 13:35:15.419789 1213906 main.go:141] libmachine: (old-k8s-version-435730) DBG | I0407 13:35:15.419721 1213958 retry.go:31] will retry after 2.45468189s: waiting for domain to come up
	I0407 13:35:17.876446 1213906 main.go:141] libmachine: (old-k8s-version-435730) DBG | domain old-k8s-version-435730 has defined MAC address 52:54:00:e3:da:b2 in network mk-old-k8s-version-435730
	I0407 13:35:17.877095 1213906 main.go:141] libmachine: (old-k8s-version-435730) DBG | unable to find current IP address of domain old-k8s-version-435730 in network mk-old-k8s-version-435730
	I0407 13:35:17.877131 1213906 main.go:141] libmachine: (old-k8s-version-435730) DBG | I0407 13:35:17.877033 1213958 retry.go:31] will retry after 4.288005081s: waiting for domain to come up
	I0407 13:35:22.170014 1213906 main.go:141] libmachine: (old-k8s-version-435730) DBG | domain old-k8s-version-435730 has defined MAC address 52:54:00:e3:da:b2 in network mk-old-k8s-version-435730
	I0407 13:35:22.170763 1213906 main.go:141] libmachine: (old-k8s-version-435730) DBG | domain old-k8s-version-435730 has current primary IP address 192.168.39.211 and MAC address 52:54:00:e3:da:b2 in network mk-old-k8s-version-435730
	I0407 13:35:22.170792 1213906 main.go:141] libmachine: (old-k8s-version-435730) found domain IP: 192.168.39.211
	I0407 13:35:22.170801 1213906 main.go:141] libmachine: (old-k8s-version-435730) reserving static IP address...
	I0407 13:35:22.171443 1213906 main.go:141] libmachine: (old-k8s-version-435730) reserved static IP address 192.168.39.211 for domain old-k8s-version-435730
	I0407 13:35:22.171474 1213906 main.go:141] libmachine: (old-k8s-version-435730) waiting for SSH...
	I0407 13:35:22.171563 1213906 main.go:141] libmachine: (old-k8s-version-435730) DBG | found host DHCP lease matching {name: "old-k8s-version-435730", mac: "52:54:00:e3:da:b2", ip: "192.168.39.211"} in network mk-old-k8s-version-435730: {Iface:virbr1 ExpiryTime:2025-04-07 14:35:15 +0000 UTC Type:0 Mac:52:54:00:e3:da:b2 Iaid: IPaddr:192.168.39.211 Prefix:24 Hostname:old-k8s-version-435730 Clientid:01:52:54:00:e3:da:b2}
	I0407 13:35:22.171602 1213906 main.go:141] libmachine: (old-k8s-version-435730) DBG | skip adding static IP to network mk-old-k8s-version-435730 - found existing host DHCP lease matching {name: "old-k8s-version-435730", mac: "52:54:00:e3:da:b2", ip: "192.168.39.211"}
	I0407 13:35:22.171659 1213906 main.go:141] libmachine: (old-k8s-version-435730) DBG | Getting to WaitForSSH function...
	I0407 13:35:22.175281 1213906 main.go:141] libmachine: (old-k8s-version-435730) DBG | domain old-k8s-version-435730 has defined MAC address 52:54:00:e3:da:b2 in network mk-old-k8s-version-435730
	I0407 13:35:22.175967 1213906 main.go:141] libmachine: (old-k8s-version-435730) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e3:da:b2", ip: ""} in network mk-old-k8s-version-435730: {Iface:virbr1 ExpiryTime:2025-04-07 14:35:15 +0000 UTC Type:0 Mac:52:54:00:e3:da:b2 Iaid: IPaddr:192.168.39.211 Prefix:24 Hostname:old-k8s-version-435730 Clientid:01:52:54:00:e3:da:b2}
	I0407 13:35:22.176014 1213906 main.go:141] libmachine: (old-k8s-version-435730) DBG | domain old-k8s-version-435730 has defined IP address 192.168.39.211 and MAC address 52:54:00:e3:da:b2 in network mk-old-k8s-version-435730
	I0407 13:35:22.176315 1213906 main.go:141] libmachine: (old-k8s-version-435730) DBG | Using SSH client type: external
	I0407 13:35:22.176342 1213906 main.go:141] libmachine: (old-k8s-version-435730) DBG | Using SSH private key: /home/jenkins/minikube-integration/20602-1162386/.minikube/machines/old-k8s-version-435730/id_rsa (-rw-------)
	I0407 13:35:22.176361 1213906 main.go:141] libmachine: (old-k8s-version-435730) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.39.211 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/20602-1162386/.minikube/machines/old-k8s-version-435730/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0407 13:35:22.176379 1213906 main.go:141] libmachine: (old-k8s-version-435730) DBG | About to run SSH command:
	I0407 13:35:22.176386 1213906 main.go:141] libmachine: (old-k8s-version-435730) DBG | exit 0
	I0407 13:35:22.310519 1213906 main.go:141] libmachine: (old-k8s-version-435730) DBG | SSH cmd err, output: <nil>: 
	I0407 13:35:22.310911 1213906 main.go:141] libmachine: (old-k8s-version-435730) Calling .GetConfigRaw
	I0407 13:35:22.311593 1213906 main.go:141] libmachine: (old-k8s-version-435730) Calling .GetIP
	I0407 13:35:22.314925 1213906 main.go:141] libmachine: (old-k8s-version-435730) DBG | domain old-k8s-version-435730 has defined MAC address 52:54:00:e3:da:b2 in network mk-old-k8s-version-435730
	I0407 13:35:22.315393 1213906 main.go:141] libmachine: (old-k8s-version-435730) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e3:da:b2", ip: ""} in network mk-old-k8s-version-435730: {Iface:virbr1 ExpiryTime:2025-04-07 14:35:15 +0000 UTC Type:0 Mac:52:54:00:e3:da:b2 Iaid: IPaddr:192.168.39.211 Prefix:24 Hostname:old-k8s-version-435730 Clientid:01:52:54:00:e3:da:b2}
	I0407 13:35:22.315435 1213906 main.go:141] libmachine: (old-k8s-version-435730) DBG | domain old-k8s-version-435730 has defined IP address 192.168.39.211 and MAC address 52:54:00:e3:da:b2 in network mk-old-k8s-version-435730
	I0407 13:35:22.315711 1213906 profile.go:143] Saving config to /home/jenkins/minikube-integration/20602-1162386/.minikube/profiles/old-k8s-version-435730/config.json ...
	I0407 13:35:22.315939 1213906 machine.go:93] provisionDockerMachine start ...
	I0407 13:35:22.315961 1213906 main.go:141] libmachine: (old-k8s-version-435730) Calling .DriverName
	I0407 13:35:22.316262 1213906 main.go:141] libmachine: (old-k8s-version-435730) Calling .GetSSHHostname
	I0407 13:35:22.319500 1213906 main.go:141] libmachine: (old-k8s-version-435730) DBG | domain old-k8s-version-435730 has defined MAC address 52:54:00:e3:da:b2 in network mk-old-k8s-version-435730
	I0407 13:35:22.319914 1213906 main.go:141] libmachine: (old-k8s-version-435730) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e3:da:b2", ip: ""} in network mk-old-k8s-version-435730: {Iface:virbr1 ExpiryTime:2025-04-07 14:35:15 +0000 UTC Type:0 Mac:52:54:00:e3:da:b2 Iaid: IPaddr:192.168.39.211 Prefix:24 Hostname:old-k8s-version-435730 Clientid:01:52:54:00:e3:da:b2}
	I0407 13:35:22.319939 1213906 main.go:141] libmachine: (old-k8s-version-435730) DBG | domain old-k8s-version-435730 has defined IP address 192.168.39.211 and MAC address 52:54:00:e3:da:b2 in network mk-old-k8s-version-435730
	I0407 13:35:22.320224 1213906 main.go:141] libmachine: (old-k8s-version-435730) Calling .GetSSHPort
	I0407 13:35:22.320452 1213906 main.go:141] libmachine: (old-k8s-version-435730) Calling .GetSSHKeyPath
	I0407 13:35:22.320705 1213906 main.go:141] libmachine: (old-k8s-version-435730) Calling .GetSSHKeyPath
	I0407 13:35:22.320908 1213906 main.go:141] libmachine: (old-k8s-version-435730) Calling .GetSSHUsername
	I0407 13:35:22.321141 1213906 main.go:141] libmachine: Using SSH client type: native
	I0407 13:35:22.321392 1213906 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x836360] 0x839060 <nil>  [] 0s} 192.168.39.211 22 <nil> <nil>}
	I0407 13:35:22.321404 1213906 main.go:141] libmachine: About to run SSH command:
	hostname
	I0407 13:35:22.438803 1213906 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I0407 13:35:22.438837 1213906 main.go:141] libmachine: (old-k8s-version-435730) Calling .GetMachineName
	I0407 13:35:22.439213 1213906 buildroot.go:166] provisioning hostname "old-k8s-version-435730"
	I0407 13:35:22.439244 1213906 main.go:141] libmachine: (old-k8s-version-435730) Calling .GetMachineName
	I0407 13:35:22.439541 1213906 main.go:141] libmachine: (old-k8s-version-435730) Calling .GetSSHHostname
	I0407 13:35:22.442887 1213906 main.go:141] libmachine: (old-k8s-version-435730) DBG | domain old-k8s-version-435730 has defined MAC address 52:54:00:e3:da:b2 in network mk-old-k8s-version-435730
	I0407 13:35:22.443433 1213906 main.go:141] libmachine: (old-k8s-version-435730) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e3:da:b2", ip: ""} in network mk-old-k8s-version-435730: {Iface:virbr1 ExpiryTime:2025-04-07 14:35:15 +0000 UTC Type:0 Mac:52:54:00:e3:da:b2 Iaid: IPaddr:192.168.39.211 Prefix:24 Hostname:old-k8s-version-435730 Clientid:01:52:54:00:e3:da:b2}
	I0407 13:35:22.443462 1213906 main.go:141] libmachine: (old-k8s-version-435730) DBG | domain old-k8s-version-435730 has defined IP address 192.168.39.211 and MAC address 52:54:00:e3:da:b2 in network mk-old-k8s-version-435730
	I0407 13:35:22.443797 1213906 main.go:141] libmachine: (old-k8s-version-435730) Calling .GetSSHPort
	I0407 13:35:22.444061 1213906 main.go:141] libmachine: (old-k8s-version-435730) Calling .GetSSHKeyPath
	I0407 13:35:22.444401 1213906 main.go:141] libmachine: (old-k8s-version-435730) Calling .GetSSHKeyPath
	I0407 13:35:22.444611 1213906 main.go:141] libmachine: (old-k8s-version-435730) Calling .GetSSHUsername
	I0407 13:35:22.444904 1213906 main.go:141] libmachine: Using SSH client type: native
	I0407 13:35:22.445175 1213906 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x836360] 0x839060 <nil>  [] 0s} 192.168.39.211 22 <nil> <nil>}
	I0407 13:35:22.445202 1213906 main.go:141] libmachine: About to run SSH command:
	sudo hostname old-k8s-version-435730 && echo "old-k8s-version-435730" | sudo tee /etc/hostname
	I0407 13:35:22.577398 1213906 main.go:141] libmachine: SSH cmd err, output: <nil>: old-k8s-version-435730
	
	I0407 13:35:22.577434 1213906 main.go:141] libmachine: (old-k8s-version-435730) Calling .GetSSHHostname
	I0407 13:35:22.580691 1213906 main.go:141] libmachine: (old-k8s-version-435730) DBG | domain old-k8s-version-435730 has defined MAC address 52:54:00:e3:da:b2 in network mk-old-k8s-version-435730
	I0407 13:35:22.581167 1213906 main.go:141] libmachine: (old-k8s-version-435730) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e3:da:b2", ip: ""} in network mk-old-k8s-version-435730: {Iface:virbr1 ExpiryTime:2025-04-07 14:35:15 +0000 UTC Type:0 Mac:52:54:00:e3:da:b2 Iaid: IPaddr:192.168.39.211 Prefix:24 Hostname:old-k8s-version-435730 Clientid:01:52:54:00:e3:da:b2}
	I0407 13:35:22.581211 1213906 main.go:141] libmachine: (old-k8s-version-435730) DBG | domain old-k8s-version-435730 has defined IP address 192.168.39.211 and MAC address 52:54:00:e3:da:b2 in network mk-old-k8s-version-435730
	I0407 13:35:22.581521 1213906 main.go:141] libmachine: (old-k8s-version-435730) Calling .GetSSHPort
	I0407 13:35:22.581782 1213906 main.go:141] libmachine: (old-k8s-version-435730) Calling .GetSSHKeyPath
	I0407 13:35:22.582007 1213906 main.go:141] libmachine: (old-k8s-version-435730) Calling .GetSSHKeyPath
	I0407 13:35:22.582184 1213906 main.go:141] libmachine: (old-k8s-version-435730) Calling .GetSSHUsername
	I0407 13:35:22.582377 1213906 main.go:141] libmachine: Using SSH client type: native
	I0407 13:35:22.582621 1213906 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x836360] 0x839060 <nil>  [] 0s} 192.168.39.211 22 <nil> <nil>}
	I0407 13:35:22.582640 1213906 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sold-k8s-version-435730' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 old-k8s-version-435730/g' /etc/hosts;
				else 
					echo '127.0.1.1 old-k8s-version-435730' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0407 13:35:22.709430 1213906 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0407 13:35:22.709479 1213906 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/20602-1162386/.minikube CaCertPath:/home/jenkins/minikube-integration/20602-1162386/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/20602-1162386/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/20602-1162386/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/20602-1162386/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/20602-1162386/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/20602-1162386/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/20602-1162386/.minikube}
	I0407 13:35:22.709535 1213906 buildroot.go:174] setting up certificates
	I0407 13:35:22.709556 1213906 provision.go:84] configureAuth start
	I0407 13:35:22.709574 1213906 main.go:141] libmachine: (old-k8s-version-435730) Calling .GetMachineName
	I0407 13:35:22.710012 1213906 main.go:141] libmachine: (old-k8s-version-435730) Calling .GetIP
	I0407 13:35:22.713939 1213906 main.go:141] libmachine: (old-k8s-version-435730) DBG | domain old-k8s-version-435730 has defined MAC address 52:54:00:e3:da:b2 in network mk-old-k8s-version-435730
	I0407 13:35:22.714519 1213906 main.go:141] libmachine: (old-k8s-version-435730) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e3:da:b2", ip: ""} in network mk-old-k8s-version-435730: {Iface:virbr1 ExpiryTime:2025-04-07 14:35:15 +0000 UTC Type:0 Mac:52:54:00:e3:da:b2 Iaid: IPaddr:192.168.39.211 Prefix:24 Hostname:old-k8s-version-435730 Clientid:01:52:54:00:e3:da:b2}
	I0407 13:35:22.714541 1213906 main.go:141] libmachine: (old-k8s-version-435730) DBG | domain old-k8s-version-435730 has defined IP address 192.168.39.211 and MAC address 52:54:00:e3:da:b2 in network mk-old-k8s-version-435730
	I0407 13:35:22.714894 1213906 main.go:141] libmachine: (old-k8s-version-435730) Calling .GetSSHHostname
	I0407 13:35:22.718128 1213906 main.go:141] libmachine: (old-k8s-version-435730) DBG | domain old-k8s-version-435730 has defined MAC address 52:54:00:e3:da:b2 in network mk-old-k8s-version-435730
	I0407 13:35:22.718615 1213906 main.go:141] libmachine: (old-k8s-version-435730) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e3:da:b2", ip: ""} in network mk-old-k8s-version-435730: {Iface:virbr1 ExpiryTime:2025-04-07 14:35:15 +0000 UTC Type:0 Mac:52:54:00:e3:da:b2 Iaid: IPaddr:192.168.39.211 Prefix:24 Hostname:old-k8s-version-435730 Clientid:01:52:54:00:e3:da:b2}
	I0407 13:35:22.718669 1213906 main.go:141] libmachine: (old-k8s-version-435730) DBG | domain old-k8s-version-435730 has defined IP address 192.168.39.211 and MAC address 52:54:00:e3:da:b2 in network mk-old-k8s-version-435730
	I0407 13:35:22.718920 1213906 provision.go:143] copyHostCerts
	I0407 13:35:22.719005 1213906 exec_runner.go:144] found /home/jenkins/minikube-integration/20602-1162386/.minikube/ca.pem, removing ...
	I0407 13:35:22.719019 1213906 exec_runner.go:203] rm: /home/jenkins/minikube-integration/20602-1162386/.minikube/ca.pem
	I0407 13:35:22.719107 1213906 exec_runner.go:151] cp: /home/jenkins/minikube-integration/20602-1162386/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/20602-1162386/.minikube/ca.pem (1078 bytes)
	I0407 13:35:22.719303 1213906 exec_runner.go:144] found /home/jenkins/minikube-integration/20602-1162386/.minikube/cert.pem, removing ...
	I0407 13:35:22.719318 1213906 exec_runner.go:203] rm: /home/jenkins/minikube-integration/20602-1162386/.minikube/cert.pem
	I0407 13:35:22.719355 1213906 exec_runner.go:151] cp: /home/jenkins/minikube-integration/20602-1162386/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/20602-1162386/.minikube/cert.pem (1123 bytes)
	I0407 13:35:22.719452 1213906 exec_runner.go:144] found /home/jenkins/minikube-integration/20602-1162386/.minikube/key.pem, removing ...
	I0407 13:35:22.719462 1213906 exec_runner.go:203] rm: /home/jenkins/minikube-integration/20602-1162386/.minikube/key.pem
	I0407 13:35:22.719498 1213906 exec_runner.go:151] cp: /home/jenkins/minikube-integration/20602-1162386/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/20602-1162386/.minikube/key.pem (1675 bytes)
	I0407 13:35:22.719584 1213906 provision.go:117] generating server cert: /home/jenkins/minikube-integration/20602-1162386/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/20602-1162386/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/20602-1162386/.minikube/certs/ca-key.pem org=jenkins.old-k8s-version-435730 san=[127.0.0.1 192.168.39.211 localhost minikube old-k8s-version-435730]
	I0407 13:35:22.975238 1213906 provision.go:177] copyRemoteCerts
	I0407 13:35:22.975313 1213906 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0407 13:35:22.975344 1213906 main.go:141] libmachine: (old-k8s-version-435730) Calling .GetSSHHostname
	I0407 13:35:22.979698 1213906 main.go:141] libmachine: (old-k8s-version-435730) DBG | domain old-k8s-version-435730 has defined MAC address 52:54:00:e3:da:b2 in network mk-old-k8s-version-435730
	I0407 13:35:22.980537 1213906 main.go:141] libmachine: (old-k8s-version-435730) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e3:da:b2", ip: ""} in network mk-old-k8s-version-435730: {Iface:virbr1 ExpiryTime:2025-04-07 14:35:15 +0000 UTC Type:0 Mac:52:54:00:e3:da:b2 Iaid: IPaddr:192.168.39.211 Prefix:24 Hostname:old-k8s-version-435730 Clientid:01:52:54:00:e3:da:b2}
	I0407 13:35:22.980586 1213906 main.go:141] libmachine: (old-k8s-version-435730) DBG | domain old-k8s-version-435730 has defined IP address 192.168.39.211 and MAC address 52:54:00:e3:da:b2 in network mk-old-k8s-version-435730
	I0407 13:35:22.980760 1213906 main.go:141] libmachine: (old-k8s-version-435730) Calling .GetSSHPort
	I0407 13:35:22.981005 1213906 main.go:141] libmachine: (old-k8s-version-435730) Calling .GetSSHKeyPath
	I0407 13:35:22.981200 1213906 main.go:141] libmachine: (old-k8s-version-435730) Calling .GetSSHUsername
	I0407 13:35:22.981438 1213906 sshutil.go:53] new ssh client: &{IP:192.168.39.211 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20602-1162386/.minikube/machines/old-k8s-version-435730/id_rsa Username:docker}
	I0407 13:35:23.073483 1213906 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20602-1162386/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0407 13:35:23.102949 1213906 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20602-1162386/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0407 13:35:23.130613 1213906 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20602-1162386/.minikube/machines/server.pem --> /etc/docker/server.pem (1233 bytes)
	I0407 13:35:23.157404 1213906 provision.go:87] duration metric: took 447.82344ms to configureAuth
	I0407 13:35:23.157437 1213906 buildroot.go:189] setting minikube options for container-runtime
	I0407 13:35:23.157633 1213906 config.go:182] Loaded profile config "old-k8s-version-435730": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.20.0
	I0407 13:35:23.157740 1213906 main.go:141] libmachine: (old-k8s-version-435730) Calling .GetSSHHostname
	I0407 13:35:23.161972 1213906 main.go:141] libmachine: (old-k8s-version-435730) DBG | domain old-k8s-version-435730 has defined MAC address 52:54:00:e3:da:b2 in network mk-old-k8s-version-435730
	I0407 13:35:23.162414 1213906 main.go:141] libmachine: (old-k8s-version-435730) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e3:da:b2", ip: ""} in network mk-old-k8s-version-435730: {Iface:virbr1 ExpiryTime:2025-04-07 14:35:15 +0000 UTC Type:0 Mac:52:54:00:e3:da:b2 Iaid: IPaddr:192.168.39.211 Prefix:24 Hostname:old-k8s-version-435730 Clientid:01:52:54:00:e3:da:b2}
	I0407 13:35:23.162449 1213906 main.go:141] libmachine: (old-k8s-version-435730) DBG | domain old-k8s-version-435730 has defined IP address 192.168.39.211 and MAC address 52:54:00:e3:da:b2 in network mk-old-k8s-version-435730
	I0407 13:35:23.162658 1213906 main.go:141] libmachine: (old-k8s-version-435730) Calling .GetSSHPort
	I0407 13:35:23.162928 1213906 main.go:141] libmachine: (old-k8s-version-435730) Calling .GetSSHKeyPath
	I0407 13:35:23.163183 1213906 main.go:141] libmachine: (old-k8s-version-435730) Calling .GetSSHKeyPath
	I0407 13:35:23.163418 1213906 main.go:141] libmachine: (old-k8s-version-435730) Calling .GetSSHUsername
	I0407 13:35:23.163660 1213906 main.go:141] libmachine: Using SSH client type: native
	I0407 13:35:23.164007 1213906 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x836360] 0x839060 <nil>  [] 0s} 192.168.39.211 22 <nil> <nil>}
	I0407 13:35:23.164032 1213906 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0407 13:35:23.409353 1213906 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0407 13:35:23.409384 1213906 machine.go:96] duration metric: took 1.093430593s to provisionDockerMachine
	I0407 13:35:23.409399 1213906 start.go:293] postStartSetup for "old-k8s-version-435730" (driver="kvm2")
	I0407 13:35:23.409414 1213906 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0407 13:35:23.409445 1213906 main.go:141] libmachine: (old-k8s-version-435730) Calling .DriverName
	I0407 13:35:23.409858 1213906 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0407 13:35:23.409904 1213906 main.go:141] libmachine: (old-k8s-version-435730) Calling .GetSSHHostname
	I0407 13:35:23.413038 1213906 main.go:141] libmachine: (old-k8s-version-435730) DBG | domain old-k8s-version-435730 has defined MAC address 52:54:00:e3:da:b2 in network mk-old-k8s-version-435730
	I0407 13:35:23.413470 1213906 main.go:141] libmachine: (old-k8s-version-435730) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e3:da:b2", ip: ""} in network mk-old-k8s-version-435730: {Iface:virbr1 ExpiryTime:2025-04-07 14:35:15 +0000 UTC Type:0 Mac:52:54:00:e3:da:b2 Iaid: IPaddr:192.168.39.211 Prefix:24 Hostname:old-k8s-version-435730 Clientid:01:52:54:00:e3:da:b2}
	I0407 13:35:23.413500 1213906 main.go:141] libmachine: (old-k8s-version-435730) DBG | domain old-k8s-version-435730 has defined IP address 192.168.39.211 and MAC address 52:54:00:e3:da:b2 in network mk-old-k8s-version-435730
	I0407 13:35:23.413846 1213906 main.go:141] libmachine: (old-k8s-version-435730) Calling .GetSSHPort
	I0407 13:35:23.414100 1213906 main.go:141] libmachine: (old-k8s-version-435730) Calling .GetSSHKeyPath
	I0407 13:35:23.414243 1213906 main.go:141] libmachine: (old-k8s-version-435730) Calling .GetSSHUsername
	I0407 13:35:23.414348 1213906 sshutil.go:53] new ssh client: &{IP:192.168.39.211 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20602-1162386/.minikube/machines/old-k8s-version-435730/id_rsa Username:docker}
	I0407 13:35:23.506708 1213906 ssh_runner.go:195] Run: cat /etc/os-release
	I0407 13:35:23.511599 1213906 info.go:137] Remote host: Buildroot 2023.02.9
	I0407 13:35:23.511634 1213906 filesync.go:126] Scanning /home/jenkins/minikube-integration/20602-1162386/.minikube/addons for local assets ...
	I0407 13:35:23.511699 1213906 filesync.go:126] Scanning /home/jenkins/minikube-integration/20602-1162386/.minikube/files for local assets ...
	I0407 13:35:23.511780 1213906 filesync.go:149] local asset: /home/jenkins/minikube-integration/20602-1162386/.minikube/files/etc/ssl/certs/11697162.pem -> 11697162.pem in /etc/ssl/certs
	I0407 13:35:23.511897 1213906 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0407 13:35:23.523607 1213906 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20602-1162386/.minikube/files/etc/ssl/certs/11697162.pem --> /etc/ssl/certs/11697162.pem (1708 bytes)
	I0407 13:35:23.551197 1213906 start.go:296] duration metric: took 141.766584ms for postStartSetup
	I0407 13:35:23.551262 1213906 fix.go:56] duration metric: took 20.81664523s for fixHost
	I0407 13:35:23.551289 1213906 main.go:141] libmachine: (old-k8s-version-435730) Calling .GetSSHHostname
	I0407 13:35:23.554708 1213906 main.go:141] libmachine: (old-k8s-version-435730) DBG | domain old-k8s-version-435730 has defined MAC address 52:54:00:e3:da:b2 in network mk-old-k8s-version-435730
	I0407 13:35:23.555130 1213906 main.go:141] libmachine: (old-k8s-version-435730) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e3:da:b2", ip: ""} in network mk-old-k8s-version-435730: {Iface:virbr1 ExpiryTime:2025-04-07 14:35:15 +0000 UTC Type:0 Mac:52:54:00:e3:da:b2 Iaid: IPaddr:192.168.39.211 Prefix:24 Hostname:old-k8s-version-435730 Clientid:01:52:54:00:e3:da:b2}
	I0407 13:35:23.555178 1213906 main.go:141] libmachine: (old-k8s-version-435730) DBG | domain old-k8s-version-435730 has defined IP address 192.168.39.211 and MAC address 52:54:00:e3:da:b2 in network mk-old-k8s-version-435730
	I0407 13:35:23.555458 1213906 main.go:141] libmachine: (old-k8s-version-435730) Calling .GetSSHPort
	I0407 13:35:23.555742 1213906 main.go:141] libmachine: (old-k8s-version-435730) Calling .GetSSHKeyPath
	I0407 13:35:23.555913 1213906 main.go:141] libmachine: (old-k8s-version-435730) Calling .GetSSHKeyPath
	I0407 13:35:23.556098 1213906 main.go:141] libmachine: (old-k8s-version-435730) Calling .GetSSHUsername
	I0407 13:35:23.556351 1213906 main.go:141] libmachine: Using SSH client type: native
	I0407 13:35:23.557211 1213906 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x836360] 0x839060 <nil>  [] 0s} 192.168.39.211 22 <nil> <nil>}
	I0407 13:35:23.557264 1213906 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0407 13:35:23.675597 1213906 main.go:141] libmachine: SSH cmd err, output: <nil>: 1744032923.626598856
	
	I0407 13:35:23.675632 1213906 fix.go:216] guest clock: 1744032923.626598856
	I0407 13:35:23.675644 1213906 fix.go:229] Guest: 2025-04-07 13:35:23.626598856 +0000 UTC Remote: 2025-04-07 13:35:23.551267855 +0000 UTC m=+21.010987210 (delta=75.331001ms)
	I0407 13:35:23.675672 1213906 fix.go:200] guest clock delta is within tolerance: 75.331001ms
	I0407 13:35:23.675680 1213906 start.go:83] releasing machines lock for "old-k8s-version-435730", held for 20.941076179s
	I0407 13:35:23.675711 1213906 main.go:141] libmachine: (old-k8s-version-435730) Calling .DriverName
	I0407 13:35:23.676122 1213906 main.go:141] libmachine: (old-k8s-version-435730) Calling .GetIP
	I0407 13:35:23.679731 1213906 main.go:141] libmachine: (old-k8s-version-435730) DBG | domain old-k8s-version-435730 has defined MAC address 52:54:00:e3:da:b2 in network mk-old-k8s-version-435730
	I0407 13:35:23.680205 1213906 main.go:141] libmachine: (old-k8s-version-435730) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e3:da:b2", ip: ""} in network mk-old-k8s-version-435730: {Iface:virbr1 ExpiryTime:2025-04-07 14:35:15 +0000 UTC Type:0 Mac:52:54:00:e3:da:b2 Iaid: IPaddr:192.168.39.211 Prefix:24 Hostname:old-k8s-version-435730 Clientid:01:52:54:00:e3:da:b2}
	I0407 13:35:23.680257 1213906 main.go:141] libmachine: (old-k8s-version-435730) DBG | domain old-k8s-version-435730 has defined IP address 192.168.39.211 and MAC address 52:54:00:e3:da:b2 in network mk-old-k8s-version-435730
	I0407 13:35:23.680478 1213906 main.go:141] libmachine: (old-k8s-version-435730) Calling .DriverName
	I0407 13:35:23.681282 1213906 main.go:141] libmachine: (old-k8s-version-435730) Calling .DriverName
	I0407 13:35:23.681536 1213906 main.go:141] libmachine: (old-k8s-version-435730) Calling .DriverName
	I0407 13:35:23.681673 1213906 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0407 13:35:23.681751 1213906 main.go:141] libmachine: (old-k8s-version-435730) Calling .GetSSHHostname
	I0407 13:35:23.681814 1213906 ssh_runner.go:195] Run: cat /version.json
	I0407 13:35:23.681846 1213906 main.go:141] libmachine: (old-k8s-version-435730) Calling .GetSSHHostname
	I0407 13:35:23.684850 1213906 main.go:141] libmachine: (old-k8s-version-435730) DBG | domain old-k8s-version-435730 has defined MAC address 52:54:00:e3:da:b2 in network mk-old-k8s-version-435730
	I0407 13:35:23.685167 1213906 main.go:141] libmachine: (old-k8s-version-435730) DBG | domain old-k8s-version-435730 has defined MAC address 52:54:00:e3:da:b2 in network mk-old-k8s-version-435730
	I0407 13:35:23.685211 1213906 main.go:141] libmachine: (old-k8s-version-435730) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e3:da:b2", ip: ""} in network mk-old-k8s-version-435730: {Iface:virbr1 ExpiryTime:2025-04-07 14:35:15 +0000 UTC Type:0 Mac:52:54:00:e3:da:b2 Iaid: IPaddr:192.168.39.211 Prefix:24 Hostname:old-k8s-version-435730 Clientid:01:52:54:00:e3:da:b2}
	I0407 13:35:23.685238 1213906 main.go:141] libmachine: (old-k8s-version-435730) DBG | domain old-k8s-version-435730 has defined IP address 192.168.39.211 and MAC address 52:54:00:e3:da:b2 in network mk-old-k8s-version-435730
	I0407 13:35:23.685502 1213906 main.go:141] libmachine: (old-k8s-version-435730) Calling .GetSSHPort
	I0407 13:35:23.685603 1213906 main.go:141] libmachine: (old-k8s-version-435730) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e3:da:b2", ip: ""} in network mk-old-k8s-version-435730: {Iface:virbr1 ExpiryTime:2025-04-07 14:35:15 +0000 UTC Type:0 Mac:52:54:00:e3:da:b2 Iaid: IPaddr:192.168.39.211 Prefix:24 Hostname:old-k8s-version-435730 Clientid:01:52:54:00:e3:da:b2}
	I0407 13:35:23.685626 1213906 main.go:141] libmachine: (old-k8s-version-435730) DBG | domain old-k8s-version-435730 has defined IP address 192.168.39.211 and MAC address 52:54:00:e3:da:b2 in network mk-old-k8s-version-435730
	I0407 13:35:23.685684 1213906 main.go:141] libmachine: (old-k8s-version-435730) Calling .GetSSHKeyPath
	I0407 13:35:23.685867 1213906 main.go:141] libmachine: (old-k8s-version-435730) Calling .GetSSHPort
	I0407 13:35:23.685948 1213906 main.go:141] libmachine: (old-k8s-version-435730) Calling .GetSSHUsername
	I0407 13:35:23.686123 1213906 sshutil.go:53] new ssh client: &{IP:192.168.39.211 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20602-1162386/.minikube/machines/old-k8s-version-435730/id_rsa Username:docker}
	I0407 13:35:23.686136 1213906 main.go:141] libmachine: (old-k8s-version-435730) Calling .GetSSHKeyPath
	I0407 13:35:23.686315 1213906 main.go:141] libmachine: (old-k8s-version-435730) Calling .GetSSHUsername
	I0407 13:35:23.686553 1213906 sshutil.go:53] new ssh client: &{IP:192.168.39.211 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20602-1162386/.minikube/machines/old-k8s-version-435730/id_rsa Username:docker}
	I0407 13:35:23.771960 1213906 ssh_runner.go:195] Run: systemctl --version
	I0407 13:35:23.794910 1213906 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0407 13:35:23.951017 1213906 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0407 13:35:23.957831 1213906 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0407 13:35:23.957941 1213906 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0407 13:35:23.977491 1213906 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0407 13:35:23.977526 1213906 start.go:495] detecting cgroup driver to use...
	I0407 13:35:23.977621 1213906 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0407 13:35:23.996015 1213906 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0407 13:35:24.013289 1213906 docker.go:217] disabling cri-docker service (if available) ...
	I0407 13:35:24.013370 1213906 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0407 13:35:24.029415 1213906 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0407 13:35:24.046513 1213906 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0407 13:35:24.172899 1213906 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0407 13:35:24.360180 1213906 docker.go:233] disabling docker service ...
	I0407 13:35:24.360268 1213906 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0407 13:35:24.377134 1213906 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0407 13:35:24.392397 1213906 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0407 13:35:24.515260 1213906 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0407 13:35:24.638597 1213906 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0407 13:35:24.655144 1213906 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0407 13:35:24.675366 1213906 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.2" pause image...
	I0407 13:35:24.675435 1213906 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.2"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0407 13:35:24.688011 1213906 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0407 13:35:24.688101 1213906 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0407 13:35:24.700046 1213906 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0407 13:35:24.714360 1213906 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0407 13:35:24.727594 1213906 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0407 13:35:24.741103 1213906 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0407 13:35:24.753377 1213906 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0407 13:35:24.753482 1213906 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0407 13:35:24.769158 1213906 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0407 13:35:24.780959 1213906 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0407 13:35:24.907566 1213906 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0407 13:35:25.013675 1213906 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I0407 13:35:25.013795 1213906 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0407 13:35:25.019841 1213906 start.go:563] Will wait 60s for crictl version
	I0407 13:35:25.019913 1213906 ssh_runner.go:195] Run: which crictl
	I0407 13:35:25.024706 1213906 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0407 13:35:25.062925 1213906 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0407 13:35:25.063041 1213906 ssh_runner.go:195] Run: crio --version
	I0407 13:35:25.096739 1213906 ssh_runner.go:195] Run: crio --version
	I0407 13:35:25.141528 1213906 out.go:177] * Preparing Kubernetes v1.20.0 on CRI-O 1.29.1 ...
	I0407 13:35:25.143196 1213906 main.go:141] libmachine: (old-k8s-version-435730) Calling .GetIP
	I0407 13:35:25.147250 1213906 main.go:141] libmachine: (old-k8s-version-435730) DBG | domain old-k8s-version-435730 has defined MAC address 52:54:00:e3:da:b2 in network mk-old-k8s-version-435730
	I0407 13:35:25.147783 1213906 main.go:141] libmachine: (old-k8s-version-435730) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e3:da:b2", ip: ""} in network mk-old-k8s-version-435730: {Iface:virbr1 ExpiryTime:2025-04-07 14:35:15 +0000 UTC Type:0 Mac:52:54:00:e3:da:b2 Iaid: IPaddr:192.168.39.211 Prefix:24 Hostname:old-k8s-version-435730 Clientid:01:52:54:00:e3:da:b2}
	I0407 13:35:25.147813 1213906 main.go:141] libmachine: (old-k8s-version-435730) DBG | domain old-k8s-version-435730 has defined IP address 192.168.39.211 and MAC address 52:54:00:e3:da:b2 in network mk-old-k8s-version-435730
	I0407 13:35:25.148092 1213906 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I0407 13:35:25.152776 1213906 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.39.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0407 13:35:25.165893 1213906 kubeadm.go:883] updating cluster {Name:old-k8s-version-435730 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.35.0-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.46-1743675393-20591@sha256:8de8167e280414c9d16e4c3da59bc85bc7c9cc24228af995863fc7bcabfcf727 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:old-k8s-
version-435730 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.211 Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:
/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0407 13:35:25.166024 1213906 preload.go:131] Checking if preload exists for k8s version v1.20.0 and runtime crio
	I0407 13:35:25.166083 1213906 ssh_runner.go:195] Run: sudo crictl images --output json
	I0407 13:35:25.217125 1213906 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.20.0". assuming images are not preloaded.
	I0407 13:35:25.217204 1213906 ssh_runner.go:195] Run: which lz4
	I0407 13:35:25.224170 1213906 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I0407 13:35:25.229568 1213906 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0407 13:35:25.229616 1213906 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20602-1162386/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (473237281 bytes)
	I0407 13:35:26.980690 1213906 crio.go:462] duration metric: took 1.756571401s to copy over tarball
	I0407 13:35:26.980790 1213906 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I0407 13:35:30.472394 1213906 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (3.49156259s)
	I0407 13:35:30.472429 1213906 crio.go:469] duration metric: took 3.491696202s to extract the tarball
	I0407 13:35:30.472439 1213906 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I0407 13:35:30.516846 1213906 ssh_runner.go:195] Run: sudo crictl images --output json
	I0407 13:35:30.554250 1213906 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.20.0". assuming images are not preloaded.
	I0407 13:35:30.554284 1213906 cache_images.go:88] LoadCachedImages start: [registry.k8s.io/kube-apiserver:v1.20.0 registry.k8s.io/kube-controller-manager:v1.20.0 registry.k8s.io/kube-scheduler:v1.20.0 registry.k8s.io/kube-proxy:v1.20.0 registry.k8s.io/pause:3.2 registry.k8s.io/etcd:3.4.13-0 registry.k8s.io/coredns:1.7.0 gcr.io/k8s-minikube/storage-provisioner:v5]
	I0407 13:35:30.554375 1213906 image.go:135] retrieving image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0407 13:35:30.554428 1213906 image.go:135] retrieving image: registry.k8s.io/kube-controller-manager:v1.20.0
	I0407 13:35:30.554443 1213906 image.go:135] retrieving image: registry.k8s.io/kube-apiserver:v1.20.0
	I0407 13:35:30.554453 1213906 image.go:135] retrieving image: registry.k8s.io/kube-scheduler:v1.20.0
	I0407 13:35:30.554473 1213906 image.go:135] retrieving image: registry.k8s.io/etcd:3.4.13-0
	I0407 13:35:30.554537 1213906 image.go:135] retrieving image: registry.k8s.io/pause:3.2
	I0407 13:35:30.554544 1213906 image.go:135] retrieving image: registry.k8s.io/coredns:1.7.0
	I0407 13:35:30.554381 1213906 image.go:135] retrieving image: registry.k8s.io/kube-proxy:v1.20.0
	I0407 13:35:30.556277 1213906 image.go:178] daemon lookup for registry.k8s.io/coredns:1.7.0: Error response from daemon: No such image: registry.k8s.io/coredns:1.7.0
	I0407 13:35:30.556288 1213906 image.go:178] daemon lookup for registry.k8s.io/etcd:3.4.13-0: Error response from daemon: No such image: registry.k8s.io/etcd:3.4.13-0
	I0407 13:35:30.556279 1213906 image.go:178] daemon lookup for registry.k8s.io/kube-proxy:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-proxy:v1.20.0
	I0407 13:35:30.556311 1213906 image.go:178] daemon lookup for registry.k8s.io/kube-controller-manager:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-controller-manager:v1.20.0
	I0407 13:35:30.556287 1213906 image.go:178] daemon lookup for registry.k8s.io/kube-apiserver:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-apiserver:v1.20.0
	I0407 13:35:30.556393 1213906 image.go:178] daemon lookup for gcr.io/k8s-minikube/storage-provisioner:v5: Error response from daemon: No such image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0407 13:35:30.556292 1213906 image.go:178] daemon lookup for registry.k8s.io/pause:3.2: Error response from daemon: No such image: registry.k8s.io/pause:3.2
	I0407 13:35:30.556275 1213906 image.go:178] daemon lookup for registry.k8s.io/kube-scheduler:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-scheduler:v1.20.0
	I0407 13:35:30.706670 1213906 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-proxy:v1.20.0
	I0407 13:35:30.708051 1213906 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/etcd:3.4.13-0
	I0407 13:35:30.708565 1213906 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/coredns:1.7.0
	I0407 13:35:30.716782 1213906 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-apiserver:v1.20.0
	I0407 13:35:30.721041 1213906 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-scheduler:v1.20.0
	I0407 13:35:30.734002 1213906 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-controller-manager:v1.20.0
	I0407 13:35:30.772330 1213906 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/pause:3.2
	I0407 13:35:30.846091 1213906 cache_images.go:116] "registry.k8s.io/kube-proxy:v1.20.0" needs transfer: "registry.k8s.io/kube-proxy:v1.20.0" does not exist at hash "10cc881966cfd9287656c2fce1f144625602653d1e8b011487a7a71feb100bdc" in container runtime
	I0407 13:35:30.846152 1213906 cri.go:218] Removing image: registry.k8s.io/kube-proxy:v1.20.0
	I0407 13:35:30.846208 1213906 ssh_runner.go:195] Run: which crictl
	I0407 13:35:30.912877 1213906 cache_images.go:116] "registry.k8s.io/etcd:3.4.13-0" needs transfer: "registry.k8s.io/etcd:3.4.13-0" does not exist at hash "0369cf4303ffdb467dc219990960a9baa8512a54b0ad9283eaf55bd6c0adb934" in container runtime
	I0407 13:35:30.912932 1213906 cri.go:218] Removing image: registry.k8s.io/etcd:3.4.13-0
	I0407 13:35:30.912977 1213906 cache_images.go:116] "registry.k8s.io/coredns:1.7.0" needs transfer: "registry.k8s.io/coredns:1.7.0" does not exist at hash "bfe3a36ebd2528b454be6aebece806db5b40407b833e2af9617bf39afaff8c16" in container runtime
	I0407 13:35:30.913007 1213906 ssh_runner.go:195] Run: which crictl
	I0407 13:35:30.913029 1213906 cri.go:218] Removing image: registry.k8s.io/coredns:1.7.0
	I0407 13:35:30.913078 1213906 ssh_runner.go:195] Run: which crictl
	I0407 13:35:30.913099 1213906 cache_images.go:116] "registry.k8s.io/kube-apiserver:v1.20.0" needs transfer: "registry.k8s.io/kube-apiserver:v1.20.0" does not exist at hash "ca9843d3b545457f24b012d6d579ba85f132f2406aa171ad84d53caa55e5de99" in container runtime
	I0407 13:35:30.913128 1213906 cache_images.go:116] "registry.k8s.io/kube-scheduler:v1.20.0" needs transfer: "registry.k8s.io/kube-scheduler:v1.20.0" does not exist at hash "3138b6e3d471224fd516f758f3b53309219bcb6824e07686b3cd60d78012c899" in container runtime
	I0407 13:35:30.913143 1213906 cri.go:218] Removing image: registry.k8s.io/kube-apiserver:v1.20.0
	I0407 13:35:30.913154 1213906 cri.go:218] Removing image: registry.k8s.io/kube-scheduler:v1.20.0
	I0407 13:35:30.913189 1213906 ssh_runner.go:195] Run: which crictl
	I0407 13:35:30.913196 1213906 ssh_runner.go:195] Run: which crictl
	I0407 13:35:30.932128 1213906 cache_images.go:116] "registry.k8s.io/kube-controller-manager:v1.20.0" needs transfer: "registry.k8s.io/kube-controller-manager:v1.20.0" does not exist at hash "b9fa1895dcaa6d3dd241d6d9340e939ca30fc0946464ec9f205a8cbe738a8080" in container runtime
	I0407 13:35:30.932205 1213906 cri.go:218] Removing image: registry.k8s.io/kube-controller-manager:v1.20.0
	I0407 13:35:30.932301 1213906 ssh_runner.go:195] Run: which crictl
	I0407 13:35:30.952181 1213906 cache_images.go:116] "registry.k8s.io/pause:3.2" needs transfer: "registry.k8s.io/pause:3.2" does not exist at hash "80d28bedfe5dec59da9ebf8e6260224ac9008ab5c11dbbe16ee3ba3e4439ac2c" in container runtime
	I0407 13:35:30.952250 1213906 cri.go:218] Removing image: registry.k8s.io/pause:3.2
	I0407 13:35:30.952305 1213906 ssh_runner.go:195] Run: which crictl
	I0407 13:35:30.952314 1213906 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.20.0
	I0407 13:35:30.952404 1213906 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.20.0
	I0407 13:35:30.952413 1213906 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.20.0
	I0407 13:35:30.952500 1213906 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns:1.7.0
	I0407 13:35:30.952588 1213906 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.4.13-0
	I0407 13:35:30.952626 1213906 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.20.0
	I0407 13:35:31.072931 1213906 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.20.0
	I0407 13:35:31.072975 1213906 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/pause:3.2
	I0407 13:35:31.110440 1213906 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.20.0
	I0407 13:35:31.115361 1213906 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns:1.7.0
	I0407 13:35:31.115475 1213906 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.20.0
	I0407 13:35:31.119932 1213906 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.20.0
	I0407 13:35:31.119999 1213906 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.4.13-0
	I0407 13:35:31.182181 1213906 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/pause:3.2
	I0407 13:35:31.182225 1213906 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.20.0
	I0407 13:35:31.322380 1213906 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.20.0
	I0407 13:35:31.322404 1213906 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.20.0
	I0407 13:35:31.355192 1213906 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns:1.7.0
	I0407 13:35:31.355303 1213906 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.4.13-0
	I0407 13:35:31.355335 1213906 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.20.0
	I0407 13:35:31.355358 1213906 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/20602-1162386/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.20.0
	I0407 13:35:31.355467 1213906 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/pause:3.2
	I0407 13:35:31.424948 1213906 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/20602-1162386/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.20.0
	I0407 13:35:31.425099 1213906 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/20602-1162386/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.20.0
	I0407 13:35:31.477939 1213906 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/20602-1162386/.minikube/cache/images/amd64/registry.k8s.io/pause_3.2
	I0407 13:35:31.478098 1213906 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/20602-1162386/.minikube/cache/images/amd64/registry.k8s.io/coredns_1.7.0
	I0407 13:35:31.478123 1213906 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/20602-1162386/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.20.0
	I0407 13:35:31.478191 1213906 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/20602-1162386/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.4.13-0
	I0407 13:35:32.392582 1213906 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} gcr.io/k8s-minikube/storage-provisioner:v5
	I0407 13:35:32.538186 1213906 cache_images.go:92] duration metric: took 1.983875651s to LoadCachedImages
	W0407 13:35:32.538326 1213906 out.go:270] X Unable to load cached images: LoadCachedImages: stat /home/jenkins/minikube-integration/20602-1162386/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.20.0: no such file or directory
	X Unable to load cached images: LoadCachedImages: stat /home/jenkins/minikube-integration/20602-1162386/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.20.0: no such file or directory
	I0407 13:35:32.538344 1213906 kubeadm.go:934] updating node { 192.168.39.211 8443 v1.20.0 crio true true} ...
	I0407 13:35:32.538480 1213906 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.20.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime=remote --container-runtime-endpoint=unix:///var/run/crio/crio.sock --hostname-override=old-k8s-version-435730 --kubeconfig=/etc/kubernetes/kubelet.conf --network-plugin=cni --node-ip=192.168.39.211
	
	[Install]
	 config:
	{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-435730 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0407 13:35:32.538572 1213906 ssh_runner.go:195] Run: crio config
	I0407 13:35:32.585847 1213906 cni.go:84] Creating CNI manager for ""
	I0407 13:35:32.585878 1213906 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0407 13:35:32.585893 1213906 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0407 13:35:32.585921 1213906 kubeadm.go:189] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.39.211 APIServerPort:8443 KubernetesVersion:v1.20.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:old-k8s-version-435730 NodeName:old-k8s-version-435730 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.39.211"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.39.211 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt St
aticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:false}
	I0407 13:35:32.586092 1213906 kubeadm.go:195] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta2
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.39.211
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: /var/run/crio/crio.sock
	  name: "old-k8s-version-435730"
	  kubeletExtraArgs:
	    node-ip: 192.168.39.211
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta2
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.39.211"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	dns:
	  type: CoreDNS
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.20.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0407 13:35:32.586166 1213906 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.20.0
	I0407 13:35:32.597052 1213906 binaries.go:44] Found k8s binaries, skipping transfer
	I0407 13:35:32.597142 1213906 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0407 13:35:32.608102 1213906 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (430 bytes)
	I0407 13:35:32.627492 1213906 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0407 13:35:32.645047 1213906 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2123 bytes)
	I0407 13:35:32.666176 1213906 ssh_runner.go:195] Run: grep 192.168.39.211	control-plane.minikube.internal$ /etc/hosts
	I0407 13:35:32.670758 1213906 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.39.211	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0407 13:35:32.684838 1213906 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0407 13:35:32.823738 1213906 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0407 13:35:32.844498 1213906 certs.go:68] Setting up /home/jenkins/minikube-integration/20602-1162386/.minikube/profiles/old-k8s-version-435730 for IP: 192.168.39.211
	I0407 13:35:32.844533 1213906 certs.go:194] generating shared ca certs ...
	I0407 13:35:32.844703 1213906 certs.go:226] acquiring lock for ca certs: {Name:mk8e89191fca7f2111bdd08c345368f593b0d5d0 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0407 13:35:32.844920 1213906 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/20602-1162386/.minikube/ca.key
	I0407 13:35:32.845024 1213906 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/20602-1162386/.minikube/proxy-client-ca.key
	I0407 13:35:32.845040 1213906 certs.go:256] generating profile certs ...
	I0407 13:35:32.845189 1213906 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/20602-1162386/.minikube/profiles/old-k8s-version-435730/client.key
	I0407 13:35:32.845268 1213906 certs.go:359] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/20602-1162386/.minikube/profiles/old-k8s-version-435730/apiserver.key.7d731b07
	I0407 13:35:32.845326 1213906 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/20602-1162386/.minikube/profiles/old-k8s-version-435730/proxy-client.key
	I0407 13:35:32.845478 1213906 certs.go:484] found cert: /home/jenkins/minikube-integration/20602-1162386/.minikube/certs/1169716.pem (1338 bytes)
	W0407 13:35:32.845525 1213906 certs.go:480] ignoring /home/jenkins/minikube-integration/20602-1162386/.minikube/certs/1169716_empty.pem, impossibly tiny 0 bytes
	I0407 13:35:32.845540 1213906 certs.go:484] found cert: /home/jenkins/minikube-integration/20602-1162386/.minikube/certs/ca-key.pem (1675 bytes)
	I0407 13:35:32.845572 1213906 certs.go:484] found cert: /home/jenkins/minikube-integration/20602-1162386/.minikube/certs/ca.pem (1078 bytes)
	I0407 13:35:32.845601 1213906 certs.go:484] found cert: /home/jenkins/minikube-integration/20602-1162386/.minikube/certs/cert.pem (1123 bytes)
	I0407 13:35:32.845625 1213906 certs.go:484] found cert: /home/jenkins/minikube-integration/20602-1162386/.minikube/certs/key.pem (1675 bytes)
	I0407 13:35:32.845674 1213906 certs.go:484] found cert: /home/jenkins/minikube-integration/20602-1162386/.minikube/files/etc/ssl/certs/11697162.pem (1708 bytes)
	I0407 13:35:32.846662 1213906 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20602-1162386/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0407 13:35:32.892387 1213906 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20602-1162386/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0407 13:35:32.933308 1213906 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20602-1162386/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0407 13:35:32.965523 1213906 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20602-1162386/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0407 13:35:33.009848 1213906 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20602-1162386/.minikube/profiles/old-k8s-version-435730/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1432 bytes)
	I0407 13:35:33.041839 1213906 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20602-1162386/.minikube/profiles/old-k8s-version-435730/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0407 13:35:33.075416 1213906 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20602-1162386/.minikube/profiles/old-k8s-version-435730/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0407 13:35:33.121422 1213906 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20602-1162386/.minikube/profiles/old-k8s-version-435730/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0407 13:35:33.155754 1213906 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20602-1162386/.minikube/files/etc/ssl/certs/11697162.pem --> /usr/share/ca-certificates/11697162.pem (1708 bytes)
	I0407 13:35:33.186453 1213906 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20602-1162386/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0407 13:35:33.212168 1213906 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20602-1162386/.minikube/certs/1169716.pem --> /usr/share/ca-certificates/1169716.pem (1338 bytes)
	I0407 13:35:33.239369 1213906 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0407 13:35:33.257675 1213906 ssh_runner.go:195] Run: openssl version
	I0407 13:35:33.264159 1213906 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0407 13:35:33.276932 1213906 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0407 13:35:33.282578 1213906 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Apr  7 12:14 /usr/share/ca-certificates/minikubeCA.pem
	I0407 13:35:33.282648 1213906 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0407 13:35:33.289308 1213906 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0407 13:35:33.301734 1213906 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/1169716.pem && ln -fs /usr/share/ca-certificates/1169716.pem /etc/ssl/certs/1169716.pem"
	I0407 13:35:33.313998 1213906 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/1169716.pem
	I0407 13:35:33.319256 1213906 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Apr  7 12:22 /usr/share/ca-certificates/1169716.pem
	I0407 13:35:33.319339 1213906 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/1169716.pem
	I0407 13:35:33.325957 1213906 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/1169716.pem /etc/ssl/certs/51391683.0"
	I0407 13:35:33.338447 1213906 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/11697162.pem && ln -fs /usr/share/ca-certificates/11697162.pem /etc/ssl/certs/11697162.pem"
	I0407 13:35:33.351625 1213906 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/11697162.pem
	I0407 13:35:33.357446 1213906 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Apr  7 12:22 /usr/share/ca-certificates/11697162.pem
	I0407 13:35:33.357520 1213906 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/11697162.pem
	I0407 13:35:33.364650 1213906 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/11697162.pem /etc/ssl/certs/3ec20f2e.0"
	I0407 13:35:33.378102 1213906 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0407 13:35:33.383902 1213906 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0407 13:35:33.391087 1213906 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0407 13:35:33.398065 1213906 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0407 13:35:33.406456 1213906 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0407 13:35:33.414370 1213906 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0407 13:35:33.421627 1213906 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
	I0407 13:35:33.428721 1213906 kubeadm.go:392] StartCluster: {Name:old-k8s-version-435730 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.35.0-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.46-1743675393-20591@sha256:8de8167e280414c9d16e4c3da59bc85bc7c9cc24228af995863fc7bcabfcf727 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:old-k8s-ver
sion-435730 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.211 Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/mi
nikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0407 13:35:33.428846 1213906 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0407 13:35:33.428907 1213906 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0407 13:35:33.468194 1213906 cri.go:89] found id: ""
	I0407 13:35:33.468302 1213906 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0407 13:35:33.479857 1213906 kubeadm.go:408] found existing configuration files, will attempt cluster restart
	I0407 13:35:33.479882 1213906 kubeadm.go:593] restartPrimaryControlPlane start ...
	I0407 13:35:33.479966 1213906 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0407 13:35:33.491472 1213906 kubeadm.go:130] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0407 13:35:33.492597 1213906 kubeconfig.go:47] verify endpoint returned: get endpoint: "old-k8s-version-435730" does not appear in /home/jenkins/minikube-integration/20602-1162386/kubeconfig
	I0407 13:35:33.493697 1213906 kubeconfig.go:62] /home/jenkins/minikube-integration/20602-1162386/kubeconfig needs updating (will repair): [kubeconfig missing "old-k8s-version-435730" cluster setting kubeconfig missing "old-k8s-version-435730" context setting]
	I0407 13:35:33.494726 1213906 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20602-1162386/kubeconfig: {Name:mk712863958f7dbf2601dd82dc9ca7bea42ef42f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0407 13:35:33.496483 1213906 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0407 13:35:33.510471 1213906 kubeadm.go:630] The running cluster does not require reconfiguration: 192.168.39.211
	I0407 13:35:33.510517 1213906 kubeadm.go:1160] stopping kube-system containers ...
	I0407 13:35:33.510536 1213906 cri.go:54] listing CRI containers in root : {State:all Name: Namespaces:[kube-system]}
	I0407 13:35:33.510618 1213906 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0407 13:35:33.549519 1213906 cri.go:89] found id: ""
	I0407 13:35:33.549617 1213906 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I0407 13:35:33.567411 1213906 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0407 13:35:33.577965 1213906 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0407 13:35:33.577994 1213906 kubeadm.go:157] found existing configuration files:
	
	I0407 13:35:33.578048 1213906 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0407 13:35:33.587736 1213906 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0407 13:35:33.587837 1213906 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0407 13:35:33.598620 1213906 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0407 13:35:33.608925 1213906 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0407 13:35:33.609041 1213906 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0407 13:35:33.619997 1213906 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0407 13:35:33.630557 1213906 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0407 13:35:33.630627 1213906 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0407 13:35:33.641435 1213906 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0407 13:35:33.651017 1213906 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0407 13:35:33.651086 1213906 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0407 13:35:33.661039 1213906 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0407 13:35:33.671744 1213906 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I0407 13:35:34.033434 1213906 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I0407 13:35:34.947103 1213906 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I0407 13:35:35.199429 1213906 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I0407 13:35:35.304351 1213906 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
	I0407 13:35:35.413528 1213906 api_server.go:52] waiting for apiserver process to appear ...
	I0407 13:35:35.413641 1213906 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0407 13:35:35.913868 1213906 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0407 13:35:36.414654 1213906 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0407 13:35:36.914519 1213906 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0407 13:35:37.414605 1213906 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0407 13:35:37.914161 1213906 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0407 13:35:38.413913 1213906 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0407 13:35:38.914603 1213906 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0407 13:35:39.413978 1213906 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0407 13:35:39.914593 1213906 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0407 13:35:40.414641 1213906 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0407 13:35:40.913734 1213906 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0407 13:35:41.414556 1213906 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0407 13:35:41.913957 1213906 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0407 13:35:42.413864 1213906 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0407 13:35:42.914715 1213906 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0407 13:35:43.413858 1213906 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0407 13:35:43.914533 1213906 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0407 13:35:44.414601 1213906 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0407 13:35:44.914020 1213906 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0407 13:35:45.414066 1213906 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0407 13:35:45.913900 1213906 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0407 13:35:46.414758 1213906 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0407 13:35:46.914158 1213906 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0407 13:35:47.414122 1213906 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0407 13:35:47.914596 1213906 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0407 13:35:48.414757 1213906 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0407 13:35:48.914575 1213906 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0407 13:35:49.414186 1213906 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0407 13:35:49.913907 1213906 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0407 13:35:50.414054 1213906 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0407 13:35:50.913924 1213906 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0407 13:35:51.414443 1213906 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0407 13:35:51.914211 1213906 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0407 13:35:52.413911 1213906 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0407 13:35:52.913887 1213906 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0407 13:35:53.413860 1213906 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0407 13:35:53.914472 1213906 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0407 13:35:54.413839 1213906 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0407 13:35:54.913911 1213906 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0407 13:35:55.413900 1213906 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0407 13:35:55.913734 1213906 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0407 13:35:56.414171 1213906 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0407 13:35:56.913992 1213906 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0407 13:35:57.414509 1213906 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0407 13:35:57.913881 1213906 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0407 13:35:58.413961 1213906 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0407 13:35:58.914112 1213906 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0407 13:35:59.414114 1213906 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0407 13:35:59.913865 1213906 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0407 13:36:00.413951 1213906 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0407 13:36:00.914261 1213906 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0407 13:36:01.414013 1213906 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0407 13:36:01.914154 1213906 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0407 13:36:02.413797 1213906 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0407 13:36:02.914033 1213906 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0407 13:36:03.414095 1213906 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0407 13:36:03.914716 1213906 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0407 13:36:04.413865 1213906 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0407 13:36:04.913966 1213906 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0407 13:36:05.413879 1213906 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0407 13:36:05.913886 1213906 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0407 13:36:06.413929 1213906 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0407 13:36:06.914647 1213906 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0407 13:36:07.414210 1213906 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0407 13:36:07.914492 1213906 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0407 13:36:08.414559 1213906 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0407 13:36:08.914498 1213906 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0407 13:36:09.413925 1213906 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0407 13:36:09.913759 1213906 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0407 13:36:10.413771 1213906 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0407 13:36:10.914716 1213906 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0407 13:36:11.414239 1213906 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0407 13:36:11.914634 1213906 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0407 13:36:12.414571 1213906 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0407 13:36:12.914358 1213906 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0407 13:36:13.414075 1213906 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0407 13:36:13.914094 1213906 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0407 13:36:14.414401 1213906 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0407 13:36:14.913931 1213906 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0407 13:36:15.414071 1213906 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0407 13:36:15.914766 1213906 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0407 13:36:16.414718 1213906 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0407 13:36:16.914621 1213906 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0407 13:36:17.414692 1213906 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0407 13:36:17.914179 1213906 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0407 13:36:18.414489 1213906 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0407 13:36:18.913819 1213906 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0407 13:36:19.414268 1213906 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0407 13:36:19.914291 1213906 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0407 13:36:20.413880 1213906 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0407 13:36:20.914733 1213906 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0407 13:36:21.413820 1213906 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0407 13:36:21.914576 1213906 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0407 13:36:22.413880 1213906 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0407 13:36:22.913757 1213906 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0407 13:36:23.413919 1213906 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0407 13:36:23.914741 1213906 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0407 13:36:24.413937 1213906 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0407 13:36:24.913900 1213906 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0407 13:36:25.413894 1213906 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0407 13:36:25.913951 1213906 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0407 13:36:26.413873 1213906 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0407 13:36:26.913973 1213906 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0407 13:36:27.413856 1213906 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0407 13:36:27.914784 1213906 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0407 13:36:28.413913 1213906 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0407 13:36:28.913847 1213906 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0407 13:36:29.413971 1213906 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0407 13:36:29.914697 1213906 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0407 13:36:30.413898 1213906 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0407 13:36:30.914464 1213906 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0407 13:36:31.414560 1213906 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0407 13:36:31.914498 1213906 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0407 13:36:32.414611 1213906 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0407 13:36:32.914496 1213906 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0407 13:36:33.414445 1213906 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0407 13:36:33.914792 1213906 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0407 13:36:34.414787 1213906 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0407 13:36:34.914415 1213906 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0407 13:36:35.414720 1213906 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0407 13:36:35.414814 1213906 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0407 13:36:35.455437 1213906 cri.go:89] found id: ""
	I0407 13:36:35.455472 1213906 logs.go:282] 0 containers: []
	W0407 13:36:35.455483 1213906 logs.go:284] No container was found matching "kube-apiserver"
	I0407 13:36:35.455492 1213906 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0407 13:36:35.455567 1213906 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0407 13:36:35.497396 1213906 cri.go:89] found id: ""
	I0407 13:36:35.497430 1213906 logs.go:282] 0 containers: []
	W0407 13:36:35.497439 1213906 logs.go:284] No container was found matching "etcd"
	I0407 13:36:35.497445 1213906 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0407 13:36:35.497507 1213906 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0407 13:36:35.541795 1213906 cri.go:89] found id: ""
	I0407 13:36:35.541827 1213906 logs.go:282] 0 containers: []
	W0407 13:36:35.541837 1213906 logs.go:284] No container was found matching "coredns"
	I0407 13:36:35.541844 1213906 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0407 13:36:35.541903 1213906 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0407 13:36:35.586060 1213906 cri.go:89] found id: ""
	I0407 13:36:35.586092 1213906 logs.go:282] 0 containers: []
	W0407 13:36:35.586103 1213906 logs.go:284] No container was found matching "kube-scheduler"
	I0407 13:36:35.586112 1213906 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0407 13:36:35.586174 1213906 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0407 13:36:35.637305 1213906 cri.go:89] found id: ""
	I0407 13:36:35.637337 1213906 logs.go:282] 0 containers: []
	W0407 13:36:35.637350 1213906 logs.go:284] No container was found matching "kube-proxy"
	I0407 13:36:35.637358 1213906 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0407 13:36:35.637416 1213906 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0407 13:36:35.681970 1213906 cri.go:89] found id: ""
	I0407 13:36:35.681998 1213906 logs.go:282] 0 containers: []
	W0407 13:36:35.682006 1213906 logs.go:284] No container was found matching "kube-controller-manager"
	I0407 13:36:35.682013 1213906 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0407 13:36:35.682067 1213906 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0407 13:36:35.724879 1213906 cri.go:89] found id: ""
	I0407 13:36:35.724915 1213906 logs.go:282] 0 containers: []
	W0407 13:36:35.724926 1213906 logs.go:284] No container was found matching "kindnet"
	I0407 13:36:35.724934 1213906 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0407 13:36:35.725007 1213906 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0407 13:36:35.767125 1213906 cri.go:89] found id: ""
	I0407 13:36:35.767156 1213906 logs.go:282] 0 containers: []
	W0407 13:36:35.767165 1213906 logs.go:284] No container was found matching "kubernetes-dashboard"
	I0407 13:36:35.767179 1213906 logs.go:123] Gathering logs for container status ...
	I0407 13:36:35.767202 1213906 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0407 13:36:35.817523 1213906 logs.go:123] Gathering logs for kubelet ...
	I0407 13:36:35.817561 1213906 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0407 13:36:35.876127 1213906 logs.go:123] Gathering logs for dmesg ...
	I0407 13:36:35.876170 1213906 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0407 13:36:35.894061 1213906 logs.go:123] Gathering logs for describe nodes ...
	I0407 13:36:35.894102 1213906 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0407 13:36:36.072120 1213906 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0407 13:36:36.072142 1213906 logs.go:123] Gathering logs for CRI-O ...
	I0407 13:36:36.072160 1213906 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0407 13:36:38.647619 1213906 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0407 13:36:38.665644 1213906 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0407 13:36:38.665756 1213906 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0407 13:36:38.718584 1213906 cri.go:89] found id: ""
	I0407 13:36:38.718622 1213906 logs.go:282] 0 containers: []
	W0407 13:36:38.718635 1213906 logs.go:284] No container was found matching "kube-apiserver"
	I0407 13:36:38.718645 1213906 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0407 13:36:38.718722 1213906 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0407 13:36:38.769686 1213906 cri.go:89] found id: ""
	I0407 13:36:38.769756 1213906 logs.go:282] 0 containers: []
	W0407 13:36:38.769769 1213906 logs.go:284] No container was found matching "etcd"
	I0407 13:36:38.769779 1213906 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0407 13:36:38.769863 1213906 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0407 13:36:38.834540 1213906 cri.go:89] found id: ""
	I0407 13:36:38.834575 1213906 logs.go:282] 0 containers: []
	W0407 13:36:38.834591 1213906 logs.go:284] No container was found matching "coredns"
	I0407 13:36:38.834599 1213906 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0407 13:36:38.834669 1213906 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0407 13:36:38.888943 1213906 cri.go:89] found id: ""
	I0407 13:36:38.888993 1213906 logs.go:282] 0 containers: []
	W0407 13:36:38.889005 1213906 logs.go:284] No container was found matching "kube-scheduler"
	I0407 13:36:38.889016 1213906 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0407 13:36:38.889118 1213906 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0407 13:36:38.929698 1213906 cri.go:89] found id: ""
	I0407 13:36:38.929773 1213906 logs.go:282] 0 containers: []
	W0407 13:36:38.929787 1213906 logs.go:284] No container was found matching "kube-proxy"
	I0407 13:36:38.929798 1213906 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0407 13:36:38.929894 1213906 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0407 13:36:38.975092 1213906 cri.go:89] found id: ""
	I0407 13:36:38.975149 1213906 logs.go:282] 0 containers: []
	W0407 13:36:38.975302 1213906 logs.go:284] No container was found matching "kube-controller-manager"
	I0407 13:36:38.975332 1213906 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0407 13:36:38.975411 1213906 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0407 13:36:39.017304 1213906 cri.go:89] found id: ""
	I0407 13:36:39.017337 1213906 logs.go:282] 0 containers: []
	W0407 13:36:39.017346 1213906 logs.go:284] No container was found matching "kindnet"
	I0407 13:36:39.017353 1213906 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0407 13:36:39.017424 1213906 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0407 13:36:39.065362 1213906 cri.go:89] found id: ""
	I0407 13:36:39.065403 1213906 logs.go:282] 0 containers: []
	W0407 13:36:39.065417 1213906 logs.go:284] No container was found matching "kubernetes-dashboard"
	I0407 13:36:39.065430 1213906 logs.go:123] Gathering logs for kubelet ...
	I0407 13:36:39.065449 1213906 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0407 13:36:39.123573 1213906 logs.go:123] Gathering logs for dmesg ...
	I0407 13:36:39.123631 1213906 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0407 13:36:39.137725 1213906 logs.go:123] Gathering logs for describe nodes ...
	I0407 13:36:39.137778 1213906 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0407 13:36:39.229968 1213906 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0407 13:36:39.230000 1213906 logs.go:123] Gathering logs for CRI-O ...
	I0407 13:36:39.230022 1213906 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0407 13:36:39.340329 1213906 logs.go:123] Gathering logs for container status ...
	I0407 13:36:39.340381 1213906 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0407 13:36:41.883112 1213906 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0407 13:36:41.896258 1213906 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0407 13:36:41.896345 1213906 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0407 13:36:41.933395 1213906 cri.go:89] found id: ""
	I0407 13:36:41.933430 1213906 logs.go:282] 0 containers: []
	W0407 13:36:41.933443 1213906 logs.go:284] No container was found matching "kube-apiserver"
	I0407 13:36:41.933450 1213906 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0407 13:36:41.933525 1213906 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0407 13:36:41.966953 1213906 cri.go:89] found id: ""
	I0407 13:36:41.966988 1213906 logs.go:282] 0 containers: []
	W0407 13:36:41.966997 1213906 logs.go:284] No container was found matching "etcd"
	I0407 13:36:41.967004 1213906 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0407 13:36:41.967068 1213906 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0407 13:36:42.002063 1213906 cri.go:89] found id: ""
	I0407 13:36:42.002098 1213906 logs.go:282] 0 containers: []
	W0407 13:36:42.002108 1213906 logs.go:284] No container was found matching "coredns"
	I0407 13:36:42.002115 1213906 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0407 13:36:42.002174 1213906 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0407 13:36:42.038043 1213906 cri.go:89] found id: ""
	I0407 13:36:42.038084 1213906 logs.go:282] 0 containers: []
	W0407 13:36:42.038096 1213906 logs.go:284] No container was found matching "kube-scheduler"
	I0407 13:36:42.038105 1213906 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0407 13:36:42.038181 1213906 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0407 13:36:42.083476 1213906 cri.go:89] found id: ""
	I0407 13:36:42.083515 1213906 logs.go:282] 0 containers: []
	W0407 13:36:42.083527 1213906 logs.go:284] No container was found matching "kube-proxy"
	I0407 13:36:42.083537 1213906 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0407 13:36:42.083619 1213906 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0407 13:36:42.122711 1213906 cri.go:89] found id: ""
	I0407 13:36:42.122738 1213906 logs.go:282] 0 containers: []
	W0407 13:36:42.122749 1213906 logs.go:284] No container was found matching "kube-controller-manager"
	I0407 13:36:42.122758 1213906 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0407 13:36:42.122819 1213906 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0407 13:36:42.169860 1213906 cri.go:89] found id: ""
	I0407 13:36:42.169899 1213906 logs.go:282] 0 containers: []
	W0407 13:36:42.169912 1213906 logs.go:284] No container was found matching "kindnet"
	I0407 13:36:42.169920 1213906 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0407 13:36:42.169993 1213906 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0407 13:36:42.209097 1213906 cri.go:89] found id: ""
	I0407 13:36:42.209133 1213906 logs.go:282] 0 containers: []
	W0407 13:36:42.209145 1213906 logs.go:284] No container was found matching "kubernetes-dashboard"
	I0407 13:36:42.209157 1213906 logs.go:123] Gathering logs for kubelet ...
	I0407 13:36:42.209175 1213906 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0407 13:36:42.264759 1213906 logs.go:123] Gathering logs for dmesg ...
	I0407 13:36:42.264805 1213906 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0407 13:36:42.281638 1213906 logs.go:123] Gathering logs for describe nodes ...
	I0407 13:36:42.281684 1213906 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0407 13:36:42.355035 1213906 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0407 13:36:42.355069 1213906 logs.go:123] Gathering logs for CRI-O ...
	I0407 13:36:42.355087 1213906 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0407 13:36:42.434476 1213906 logs.go:123] Gathering logs for container status ...
	I0407 13:36:42.434527 1213906 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0407 13:36:44.975434 1213906 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0407 13:36:44.989171 1213906 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0407 13:36:44.989262 1213906 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0407 13:36:45.037212 1213906 cri.go:89] found id: ""
	I0407 13:36:45.037254 1213906 logs.go:282] 0 containers: []
	W0407 13:36:45.037266 1213906 logs.go:284] No container was found matching "kube-apiserver"
	I0407 13:36:45.037275 1213906 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0407 13:36:45.037427 1213906 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0407 13:36:45.073773 1213906 cri.go:89] found id: ""
	I0407 13:36:45.073805 1213906 logs.go:282] 0 containers: []
	W0407 13:36:45.073814 1213906 logs.go:284] No container was found matching "etcd"
	I0407 13:36:45.073820 1213906 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0407 13:36:45.073883 1213906 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0407 13:36:45.109737 1213906 cri.go:89] found id: ""
	I0407 13:36:45.109777 1213906 logs.go:282] 0 containers: []
	W0407 13:36:45.109786 1213906 logs.go:284] No container was found matching "coredns"
	I0407 13:36:45.109792 1213906 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0407 13:36:45.109859 1213906 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0407 13:36:45.149440 1213906 cri.go:89] found id: ""
	I0407 13:36:45.149479 1213906 logs.go:282] 0 containers: []
	W0407 13:36:45.149490 1213906 logs.go:284] No container was found matching "kube-scheduler"
	I0407 13:36:45.149499 1213906 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0407 13:36:45.149567 1213906 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0407 13:36:45.186275 1213906 cri.go:89] found id: ""
	I0407 13:36:45.186309 1213906 logs.go:282] 0 containers: []
	W0407 13:36:45.186320 1213906 logs.go:284] No container was found matching "kube-proxy"
	I0407 13:36:45.186329 1213906 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0407 13:36:45.186400 1213906 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0407 13:36:45.222923 1213906 cri.go:89] found id: ""
	I0407 13:36:45.222990 1213906 logs.go:282] 0 containers: []
	W0407 13:36:45.222999 1213906 logs.go:284] No container was found matching "kube-controller-manager"
	I0407 13:36:45.223008 1213906 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0407 13:36:45.223080 1213906 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0407 13:36:45.260529 1213906 cri.go:89] found id: ""
	I0407 13:36:45.260563 1213906 logs.go:282] 0 containers: []
	W0407 13:36:45.260572 1213906 logs.go:284] No container was found matching "kindnet"
	I0407 13:36:45.260583 1213906 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0407 13:36:45.260653 1213906 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0407 13:36:45.298789 1213906 cri.go:89] found id: ""
	I0407 13:36:45.298832 1213906 logs.go:282] 0 containers: []
	W0407 13:36:45.298844 1213906 logs.go:284] No container was found matching "kubernetes-dashboard"
	I0407 13:36:45.298856 1213906 logs.go:123] Gathering logs for describe nodes ...
	I0407 13:36:45.298879 1213906 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0407 13:36:45.372405 1213906 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0407 13:36:45.372434 1213906 logs.go:123] Gathering logs for CRI-O ...
	I0407 13:36:45.372455 1213906 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0407 13:36:45.451151 1213906 logs.go:123] Gathering logs for container status ...
	I0407 13:36:45.451205 1213906 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0407 13:36:45.490437 1213906 logs.go:123] Gathering logs for kubelet ...
	I0407 13:36:45.490473 1213906 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0407 13:36:45.542295 1213906 logs.go:123] Gathering logs for dmesg ...
	I0407 13:36:45.542343 1213906 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0407 13:36:48.057251 1213906 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0407 13:36:48.080036 1213906 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0407 13:36:48.080152 1213906 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0407 13:36:48.129600 1213906 cri.go:89] found id: ""
	I0407 13:36:48.129642 1213906 logs.go:282] 0 containers: []
	W0407 13:36:48.129655 1213906 logs.go:284] No container was found matching "kube-apiserver"
	I0407 13:36:48.129664 1213906 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0407 13:36:48.129768 1213906 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0407 13:36:48.182297 1213906 cri.go:89] found id: ""
	I0407 13:36:48.182339 1213906 logs.go:282] 0 containers: []
	W0407 13:36:48.182352 1213906 logs.go:284] No container was found matching "etcd"
	I0407 13:36:48.182361 1213906 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0407 13:36:48.182437 1213906 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0407 13:36:48.237150 1213906 cri.go:89] found id: ""
	I0407 13:36:48.237186 1213906 logs.go:282] 0 containers: []
	W0407 13:36:48.237197 1213906 logs.go:284] No container was found matching "coredns"
	I0407 13:36:48.237206 1213906 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0407 13:36:48.237277 1213906 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0407 13:36:48.290844 1213906 cri.go:89] found id: ""
	I0407 13:36:48.290883 1213906 logs.go:282] 0 containers: []
	W0407 13:36:48.290906 1213906 logs.go:284] No container was found matching "kube-scheduler"
	I0407 13:36:48.290915 1213906 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0407 13:36:48.290999 1213906 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0407 13:36:48.339438 1213906 cri.go:89] found id: ""
	I0407 13:36:48.339468 1213906 logs.go:282] 0 containers: []
	W0407 13:36:48.339479 1213906 logs.go:284] No container was found matching "kube-proxy"
	I0407 13:36:48.339487 1213906 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0407 13:36:48.339551 1213906 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0407 13:36:48.383702 1213906 cri.go:89] found id: ""
	I0407 13:36:48.383750 1213906 logs.go:282] 0 containers: []
	W0407 13:36:48.383762 1213906 logs.go:284] No container was found matching "kube-controller-manager"
	I0407 13:36:48.383771 1213906 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0407 13:36:48.383838 1213906 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0407 13:36:48.443261 1213906 cri.go:89] found id: ""
	I0407 13:36:48.443297 1213906 logs.go:282] 0 containers: []
	W0407 13:36:48.443311 1213906 logs.go:284] No container was found matching "kindnet"
	I0407 13:36:48.443321 1213906 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0407 13:36:48.443386 1213906 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0407 13:36:48.490033 1213906 cri.go:89] found id: ""
	I0407 13:36:48.490072 1213906 logs.go:282] 0 containers: []
	W0407 13:36:48.490085 1213906 logs.go:284] No container was found matching "kubernetes-dashboard"
	I0407 13:36:48.490100 1213906 logs.go:123] Gathering logs for kubelet ...
	I0407 13:36:48.490116 1213906 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0407 13:36:48.562756 1213906 logs.go:123] Gathering logs for dmesg ...
	I0407 13:36:48.562811 1213906 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0407 13:36:48.581206 1213906 logs.go:123] Gathering logs for describe nodes ...
	I0407 13:36:48.581248 1213906 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0407 13:36:48.685829 1213906 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0407 13:36:48.685861 1213906 logs.go:123] Gathering logs for CRI-O ...
	I0407 13:36:48.685878 1213906 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0407 13:36:48.771491 1213906 logs.go:123] Gathering logs for container status ...
	I0407 13:36:48.771546 1213906 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0407 13:36:51.322661 1213906 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0407 13:36:51.342798 1213906 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0407 13:36:51.342895 1213906 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0407 13:36:51.393658 1213906 cri.go:89] found id: ""
	I0407 13:36:51.393694 1213906 logs.go:282] 0 containers: []
	W0407 13:36:51.393728 1213906 logs.go:284] No container was found matching "kube-apiserver"
	I0407 13:36:51.393737 1213906 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0407 13:36:51.393813 1213906 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0407 13:36:51.434321 1213906 cri.go:89] found id: ""
	I0407 13:36:51.434358 1213906 logs.go:282] 0 containers: []
	W0407 13:36:51.434370 1213906 logs.go:284] No container was found matching "etcd"
	I0407 13:36:51.434379 1213906 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0407 13:36:51.434451 1213906 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0407 13:36:51.473332 1213906 cri.go:89] found id: ""
	I0407 13:36:51.473368 1213906 logs.go:282] 0 containers: []
	W0407 13:36:51.473381 1213906 logs.go:284] No container was found matching "coredns"
	I0407 13:36:51.473398 1213906 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0407 13:36:51.473482 1213906 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0407 13:36:51.523072 1213906 cri.go:89] found id: ""
	I0407 13:36:51.523105 1213906 logs.go:282] 0 containers: []
	W0407 13:36:51.523117 1213906 logs.go:284] No container was found matching "kube-scheduler"
	I0407 13:36:51.523125 1213906 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0407 13:36:51.523192 1213906 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0407 13:36:51.579107 1213906 cri.go:89] found id: ""
	I0407 13:36:51.579156 1213906 logs.go:282] 0 containers: []
	W0407 13:36:51.579168 1213906 logs.go:284] No container was found matching "kube-proxy"
	I0407 13:36:51.579176 1213906 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0407 13:36:51.579283 1213906 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0407 13:36:51.627599 1213906 cri.go:89] found id: ""
	I0407 13:36:51.627633 1213906 logs.go:282] 0 containers: []
	W0407 13:36:51.627643 1213906 logs.go:284] No container was found matching "kube-controller-manager"
	I0407 13:36:51.627652 1213906 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0407 13:36:51.627724 1213906 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0407 13:36:51.671027 1213906 cri.go:89] found id: ""
	I0407 13:36:51.671060 1213906 logs.go:282] 0 containers: []
	W0407 13:36:51.671071 1213906 logs.go:284] No container was found matching "kindnet"
	I0407 13:36:51.671079 1213906 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0407 13:36:51.671156 1213906 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0407 13:36:51.715236 1213906 cri.go:89] found id: ""
	I0407 13:36:51.715287 1213906 logs.go:282] 0 containers: []
	W0407 13:36:51.715301 1213906 logs.go:284] No container was found matching "kubernetes-dashboard"
	I0407 13:36:51.715316 1213906 logs.go:123] Gathering logs for kubelet ...
	I0407 13:36:51.715334 1213906 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0407 13:36:51.800438 1213906 logs.go:123] Gathering logs for dmesg ...
	I0407 13:36:51.800497 1213906 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0407 13:36:51.816956 1213906 logs.go:123] Gathering logs for describe nodes ...
	I0407 13:36:51.816999 1213906 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0407 13:36:51.894800 1213906 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0407 13:36:51.894836 1213906 logs.go:123] Gathering logs for CRI-O ...
	I0407 13:36:51.894854 1213906 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0407 13:36:51.976287 1213906 logs.go:123] Gathering logs for container status ...
	I0407 13:36:51.976341 1213906 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0407 13:36:54.535593 1213906 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0407 13:36:54.553307 1213906 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0407 13:36:54.553417 1213906 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0407 13:36:54.592940 1213906 cri.go:89] found id: ""
	I0407 13:36:54.592983 1213906 logs.go:282] 0 containers: []
	W0407 13:36:54.592995 1213906 logs.go:284] No container was found matching "kube-apiserver"
	I0407 13:36:54.593003 1213906 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0407 13:36:54.593081 1213906 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0407 13:36:54.630630 1213906 cri.go:89] found id: ""
	I0407 13:36:54.630666 1213906 logs.go:282] 0 containers: []
	W0407 13:36:54.630677 1213906 logs.go:284] No container was found matching "etcd"
	I0407 13:36:54.630686 1213906 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0407 13:36:54.630764 1213906 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0407 13:36:54.675375 1213906 cri.go:89] found id: ""
	I0407 13:36:54.675413 1213906 logs.go:282] 0 containers: []
	W0407 13:36:54.675426 1213906 logs.go:284] No container was found matching "coredns"
	I0407 13:36:54.675433 1213906 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0407 13:36:54.675507 1213906 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0407 13:36:54.717352 1213906 cri.go:89] found id: ""
	I0407 13:36:54.717389 1213906 logs.go:282] 0 containers: []
	W0407 13:36:54.717400 1213906 logs.go:284] No container was found matching "kube-scheduler"
	I0407 13:36:54.717410 1213906 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0407 13:36:54.717480 1213906 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0407 13:36:54.755712 1213906 cri.go:89] found id: ""
	I0407 13:36:54.755757 1213906 logs.go:282] 0 containers: []
	W0407 13:36:54.755772 1213906 logs.go:284] No container was found matching "kube-proxy"
	I0407 13:36:54.755781 1213906 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0407 13:36:54.755856 1213906 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0407 13:36:54.802148 1213906 cri.go:89] found id: ""
	I0407 13:36:54.802186 1213906 logs.go:282] 0 containers: []
	W0407 13:36:54.802198 1213906 logs.go:284] No container was found matching "kube-controller-manager"
	I0407 13:36:54.802207 1213906 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0407 13:36:54.802309 1213906 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0407 13:36:54.849640 1213906 cri.go:89] found id: ""
	I0407 13:36:54.849683 1213906 logs.go:282] 0 containers: []
	W0407 13:36:54.849695 1213906 logs.go:284] No container was found matching "kindnet"
	I0407 13:36:54.849717 1213906 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0407 13:36:54.849797 1213906 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0407 13:36:54.894319 1213906 cri.go:89] found id: ""
	I0407 13:36:54.894359 1213906 logs.go:282] 0 containers: []
	W0407 13:36:54.894372 1213906 logs.go:284] No container was found matching "kubernetes-dashboard"
	I0407 13:36:54.894386 1213906 logs.go:123] Gathering logs for CRI-O ...
	I0407 13:36:54.894404 1213906 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0407 13:36:54.978987 1213906 logs.go:123] Gathering logs for container status ...
	I0407 13:36:54.979042 1213906 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0407 13:36:55.023265 1213906 logs.go:123] Gathering logs for kubelet ...
	I0407 13:36:55.023306 1213906 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0407 13:36:55.083498 1213906 logs.go:123] Gathering logs for dmesg ...
	I0407 13:36:55.083554 1213906 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0407 13:36:55.102970 1213906 logs.go:123] Gathering logs for describe nodes ...
	I0407 13:36:55.103033 1213906 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0407 13:36:55.182300 1213906 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0407 13:36:57.683352 1213906 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0407 13:36:57.698983 1213906 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0407 13:36:57.699071 1213906 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0407 13:36:57.739452 1213906 cri.go:89] found id: ""
	I0407 13:36:57.739503 1213906 logs.go:282] 0 containers: []
	W0407 13:36:57.739515 1213906 logs.go:284] No container was found matching "kube-apiserver"
	I0407 13:36:57.739524 1213906 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0407 13:36:57.739585 1213906 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0407 13:36:57.777864 1213906 cri.go:89] found id: ""
	I0407 13:36:57.777900 1213906 logs.go:282] 0 containers: []
	W0407 13:36:57.777924 1213906 logs.go:284] No container was found matching "etcd"
	I0407 13:36:57.777933 1213906 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0407 13:36:57.778006 1213906 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0407 13:36:57.814705 1213906 cri.go:89] found id: ""
	I0407 13:36:57.814742 1213906 logs.go:282] 0 containers: []
	W0407 13:36:57.814755 1213906 logs.go:284] No container was found matching "coredns"
	I0407 13:36:57.814764 1213906 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0407 13:36:57.814844 1213906 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0407 13:36:57.861584 1213906 cri.go:89] found id: ""
	I0407 13:36:57.861624 1213906 logs.go:282] 0 containers: []
	W0407 13:36:57.861636 1213906 logs.go:284] No container was found matching "kube-scheduler"
	I0407 13:36:57.861644 1213906 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0407 13:36:57.861731 1213906 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0407 13:36:57.902828 1213906 cri.go:89] found id: ""
	I0407 13:36:57.902864 1213906 logs.go:282] 0 containers: []
	W0407 13:36:57.902874 1213906 logs.go:284] No container was found matching "kube-proxy"
	I0407 13:36:57.902882 1213906 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0407 13:36:57.902962 1213906 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0407 13:36:57.943933 1213906 cri.go:89] found id: ""
	I0407 13:36:57.943967 1213906 logs.go:282] 0 containers: []
	W0407 13:36:57.943980 1213906 logs.go:284] No container was found matching "kube-controller-manager"
	I0407 13:36:57.943991 1213906 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0407 13:36:57.944068 1213906 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0407 13:36:57.986729 1213906 cri.go:89] found id: ""
	I0407 13:36:57.986759 1213906 logs.go:282] 0 containers: []
	W0407 13:36:57.986770 1213906 logs.go:284] No container was found matching "kindnet"
	I0407 13:36:57.986778 1213906 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0407 13:36:57.986873 1213906 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0407 13:36:58.027919 1213906 cri.go:89] found id: ""
	I0407 13:36:58.027963 1213906 logs.go:282] 0 containers: []
	W0407 13:36:58.027976 1213906 logs.go:284] No container was found matching "kubernetes-dashboard"
	I0407 13:36:58.027991 1213906 logs.go:123] Gathering logs for CRI-O ...
	I0407 13:36:58.028009 1213906 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0407 13:36:58.126922 1213906 logs.go:123] Gathering logs for container status ...
	I0407 13:36:58.126989 1213906 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0407 13:36:58.174695 1213906 logs.go:123] Gathering logs for kubelet ...
	I0407 13:36:58.174737 1213906 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0407 13:36:58.248000 1213906 logs.go:123] Gathering logs for dmesg ...
	I0407 13:36:58.248048 1213906 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0407 13:36:58.264981 1213906 logs.go:123] Gathering logs for describe nodes ...
	I0407 13:36:58.265037 1213906 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0407 13:36:58.358618 1213906 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0407 13:37:00.859548 1213906 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0407 13:37:00.872481 1213906 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0407 13:37:00.872571 1213906 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0407 13:37:00.915780 1213906 cri.go:89] found id: ""
	I0407 13:37:00.915815 1213906 logs.go:282] 0 containers: []
	W0407 13:37:00.915826 1213906 logs.go:284] No container was found matching "kube-apiserver"
	I0407 13:37:00.915835 1213906 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0407 13:37:00.915904 1213906 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0407 13:37:00.957589 1213906 cri.go:89] found id: ""
	I0407 13:37:00.957631 1213906 logs.go:282] 0 containers: []
	W0407 13:37:00.957643 1213906 logs.go:284] No container was found matching "etcd"
	I0407 13:37:00.957667 1213906 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0407 13:37:00.957761 1213906 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0407 13:37:00.993945 1213906 cri.go:89] found id: ""
	I0407 13:37:00.993981 1213906 logs.go:282] 0 containers: []
	W0407 13:37:00.994020 1213906 logs.go:284] No container was found matching "coredns"
	I0407 13:37:00.994029 1213906 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0407 13:37:00.994115 1213906 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0407 13:37:01.038447 1213906 cri.go:89] found id: ""
	I0407 13:37:01.038483 1213906 logs.go:282] 0 containers: []
	W0407 13:37:01.038494 1213906 logs.go:284] No container was found matching "kube-scheduler"
	I0407 13:37:01.038502 1213906 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0407 13:37:01.038587 1213906 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0407 13:37:01.079579 1213906 cri.go:89] found id: ""
	I0407 13:37:01.079620 1213906 logs.go:282] 0 containers: []
	W0407 13:37:01.079631 1213906 logs.go:284] No container was found matching "kube-proxy"
	I0407 13:37:01.079640 1213906 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0407 13:37:01.079712 1213906 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0407 13:37:01.124214 1213906 cri.go:89] found id: ""
	I0407 13:37:01.124259 1213906 logs.go:282] 0 containers: []
	W0407 13:37:01.124272 1213906 logs.go:284] No container was found matching "kube-controller-manager"
	I0407 13:37:01.124281 1213906 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0407 13:37:01.124361 1213906 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0407 13:37:01.164775 1213906 cri.go:89] found id: ""
	I0407 13:37:01.164813 1213906 logs.go:282] 0 containers: []
	W0407 13:37:01.164825 1213906 logs.go:284] No container was found matching "kindnet"
	I0407 13:37:01.164834 1213906 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0407 13:37:01.164907 1213906 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0407 13:37:01.197940 1213906 cri.go:89] found id: ""
	I0407 13:37:01.197977 1213906 logs.go:282] 0 containers: []
	W0407 13:37:01.197989 1213906 logs.go:284] No container was found matching "kubernetes-dashboard"
	I0407 13:37:01.198002 1213906 logs.go:123] Gathering logs for kubelet ...
	I0407 13:37:01.198020 1213906 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0407 13:37:01.248994 1213906 logs.go:123] Gathering logs for dmesg ...
	I0407 13:37:01.249041 1213906 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0407 13:37:01.266719 1213906 logs.go:123] Gathering logs for describe nodes ...
	I0407 13:37:01.266762 1213906 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0407 13:37:01.348542 1213906 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0407 13:37:01.348579 1213906 logs.go:123] Gathering logs for CRI-O ...
	I0407 13:37:01.348604 1213906 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0407 13:37:01.428494 1213906 logs.go:123] Gathering logs for container status ...
	I0407 13:37:01.428556 1213906 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0407 13:37:03.971987 1213906 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0407 13:37:03.987156 1213906 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0407 13:37:03.987254 1213906 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0407 13:37:04.026541 1213906 cri.go:89] found id: ""
	I0407 13:37:04.026585 1213906 logs.go:282] 0 containers: []
	W0407 13:37:04.026601 1213906 logs.go:284] No container was found matching "kube-apiserver"
	I0407 13:37:04.026612 1213906 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0407 13:37:04.026685 1213906 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0407 13:37:04.066488 1213906 cri.go:89] found id: ""
	I0407 13:37:04.066528 1213906 logs.go:282] 0 containers: []
	W0407 13:37:04.066542 1213906 logs.go:284] No container was found matching "etcd"
	I0407 13:37:04.066552 1213906 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0407 13:37:04.066627 1213906 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0407 13:37:04.110450 1213906 cri.go:89] found id: ""
	I0407 13:37:04.110483 1213906 logs.go:282] 0 containers: []
	W0407 13:37:04.110494 1213906 logs.go:284] No container was found matching "coredns"
	I0407 13:37:04.110503 1213906 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0407 13:37:04.110573 1213906 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0407 13:37:04.160243 1213906 cri.go:89] found id: ""
	I0407 13:37:04.160283 1213906 logs.go:282] 0 containers: []
	W0407 13:37:04.160300 1213906 logs.go:284] No container was found matching "kube-scheduler"
	I0407 13:37:04.160308 1213906 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0407 13:37:04.160392 1213906 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0407 13:37:04.211788 1213906 cri.go:89] found id: ""
	I0407 13:37:04.211839 1213906 logs.go:282] 0 containers: []
	W0407 13:37:04.211855 1213906 logs.go:284] No container was found matching "kube-proxy"
	I0407 13:37:04.211865 1213906 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0407 13:37:04.212073 1213906 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0407 13:37:04.256265 1213906 cri.go:89] found id: ""
	I0407 13:37:04.256310 1213906 logs.go:282] 0 containers: []
	W0407 13:37:04.256323 1213906 logs.go:284] No container was found matching "kube-controller-manager"
	I0407 13:37:04.256334 1213906 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0407 13:37:04.256430 1213906 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0407 13:37:04.298955 1213906 cri.go:89] found id: ""
	I0407 13:37:04.299017 1213906 logs.go:282] 0 containers: []
	W0407 13:37:04.299027 1213906 logs.go:284] No container was found matching "kindnet"
	I0407 13:37:04.299037 1213906 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0407 13:37:04.299105 1213906 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0407 13:37:04.339632 1213906 cri.go:89] found id: ""
	I0407 13:37:04.339673 1213906 logs.go:282] 0 containers: []
	W0407 13:37:04.339683 1213906 logs.go:284] No container was found matching "kubernetes-dashboard"
	I0407 13:37:04.339695 1213906 logs.go:123] Gathering logs for container status ...
	I0407 13:37:04.339709 1213906 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0407 13:37:04.390726 1213906 logs.go:123] Gathering logs for kubelet ...
	I0407 13:37:04.390772 1213906 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0407 13:37:04.445531 1213906 logs.go:123] Gathering logs for dmesg ...
	I0407 13:37:04.445585 1213906 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0407 13:37:04.461043 1213906 logs.go:123] Gathering logs for describe nodes ...
	I0407 13:37:04.461082 1213906 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0407 13:37:04.535788 1213906 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0407 13:37:04.535829 1213906 logs.go:123] Gathering logs for CRI-O ...
	I0407 13:37:04.535846 1213906 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0407 13:37:07.119067 1213906 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0407 13:37:07.135674 1213906 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0407 13:37:07.135773 1213906 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0407 13:37:07.183799 1213906 cri.go:89] found id: ""
	I0407 13:37:07.183834 1213906 logs.go:282] 0 containers: []
	W0407 13:37:07.183847 1213906 logs.go:284] No container was found matching "kube-apiserver"
	I0407 13:37:07.183856 1213906 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0407 13:37:07.183930 1213906 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0407 13:37:07.234366 1213906 cri.go:89] found id: ""
	I0407 13:37:07.234398 1213906 logs.go:282] 0 containers: []
	W0407 13:37:07.234411 1213906 logs.go:284] No container was found matching "etcd"
	I0407 13:37:07.234418 1213906 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0407 13:37:07.234475 1213906 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0407 13:37:07.281340 1213906 cri.go:89] found id: ""
	I0407 13:37:07.281381 1213906 logs.go:282] 0 containers: []
	W0407 13:37:07.281392 1213906 logs.go:284] No container was found matching "coredns"
	I0407 13:37:07.281401 1213906 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0407 13:37:07.281483 1213906 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0407 13:37:07.358627 1213906 cri.go:89] found id: ""
	I0407 13:37:07.358663 1213906 logs.go:282] 0 containers: []
	W0407 13:37:07.358675 1213906 logs.go:284] No container was found matching "kube-scheduler"
	I0407 13:37:07.358687 1213906 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0407 13:37:07.358783 1213906 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0407 13:37:07.406934 1213906 cri.go:89] found id: ""
	I0407 13:37:07.406977 1213906 logs.go:282] 0 containers: []
	W0407 13:37:07.406990 1213906 logs.go:284] No container was found matching "kube-proxy"
	I0407 13:37:07.407025 1213906 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0407 13:37:07.407090 1213906 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0407 13:37:07.464417 1213906 cri.go:89] found id: ""
	I0407 13:37:07.464449 1213906 logs.go:282] 0 containers: []
	W0407 13:37:07.464458 1213906 logs.go:284] No container was found matching "kube-controller-manager"
	I0407 13:37:07.464465 1213906 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0407 13:37:07.464522 1213906 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0407 13:37:07.513150 1213906 cri.go:89] found id: ""
	I0407 13:37:07.513187 1213906 logs.go:282] 0 containers: []
	W0407 13:37:07.513196 1213906 logs.go:284] No container was found matching "kindnet"
	I0407 13:37:07.513202 1213906 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0407 13:37:07.513295 1213906 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0407 13:37:07.552552 1213906 cri.go:89] found id: ""
	I0407 13:37:07.552591 1213906 logs.go:282] 0 containers: []
	W0407 13:37:07.552602 1213906 logs.go:284] No container was found matching "kubernetes-dashboard"
	I0407 13:37:07.552615 1213906 logs.go:123] Gathering logs for container status ...
	I0407 13:37:07.552636 1213906 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0407 13:37:07.599062 1213906 logs.go:123] Gathering logs for kubelet ...
	I0407 13:37:07.599102 1213906 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0407 13:37:07.663569 1213906 logs.go:123] Gathering logs for dmesg ...
	I0407 13:37:07.663629 1213906 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0407 13:37:07.679951 1213906 logs.go:123] Gathering logs for describe nodes ...
	I0407 13:37:07.679995 1213906 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0407 13:37:07.764487 1213906 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0407 13:37:07.764517 1213906 logs.go:123] Gathering logs for CRI-O ...
	I0407 13:37:07.764534 1213906 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0407 13:37:10.362281 1213906 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0407 13:37:10.385429 1213906 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0407 13:37:10.385528 1213906 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0407 13:37:10.446662 1213906 cri.go:89] found id: ""
	I0407 13:37:10.446713 1213906 logs.go:282] 0 containers: []
	W0407 13:37:10.446726 1213906 logs.go:284] No container was found matching "kube-apiserver"
	I0407 13:37:10.446735 1213906 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0407 13:37:10.446818 1213906 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0407 13:37:10.497796 1213906 cri.go:89] found id: ""
	I0407 13:37:10.497957 1213906 logs.go:282] 0 containers: []
	W0407 13:37:10.497986 1213906 logs.go:284] No container was found matching "etcd"
	I0407 13:37:10.498021 1213906 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0407 13:37:10.498139 1213906 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0407 13:37:10.540107 1213906 cri.go:89] found id: ""
	I0407 13:37:10.540142 1213906 logs.go:282] 0 containers: []
	W0407 13:37:10.540155 1213906 logs.go:284] No container was found matching "coredns"
	I0407 13:37:10.540163 1213906 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0407 13:37:10.540233 1213906 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0407 13:37:10.588488 1213906 cri.go:89] found id: ""
	I0407 13:37:10.588615 1213906 logs.go:282] 0 containers: []
	W0407 13:37:10.588636 1213906 logs.go:284] No container was found matching "kube-scheduler"
	I0407 13:37:10.588645 1213906 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0407 13:37:10.588749 1213906 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0407 13:37:10.639711 1213906 cri.go:89] found id: ""
	I0407 13:37:10.639749 1213906 logs.go:282] 0 containers: []
	W0407 13:37:10.639761 1213906 logs.go:284] No container was found matching "kube-proxy"
	I0407 13:37:10.639770 1213906 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0407 13:37:10.639847 1213906 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0407 13:37:10.688084 1213906 cri.go:89] found id: ""
	I0407 13:37:10.688126 1213906 logs.go:282] 0 containers: []
	W0407 13:37:10.688139 1213906 logs.go:284] No container was found matching "kube-controller-manager"
	I0407 13:37:10.688147 1213906 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0407 13:37:10.688337 1213906 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0407 13:37:10.734515 1213906 cri.go:89] found id: ""
	I0407 13:37:10.734537 1213906 logs.go:282] 0 containers: []
	W0407 13:37:10.734546 1213906 logs.go:284] No container was found matching "kindnet"
	I0407 13:37:10.734554 1213906 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0407 13:37:10.734613 1213906 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0407 13:37:10.780718 1213906 cri.go:89] found id: ""
	I0407 13:37:10.780757 1213906 logs.go:282] 0 containers: []
	W0407 13:37:10.780769 1213906 logs.go:284] No container was found matching "kubernetes-dashboard"
	I0407 13:37:10.780782 1213906 logs.go:123] Gathering logs for container status ...
	I0407 13:37:10.780797 1213906 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0407 13:37:10.841163 1213906 logs.go:123] Gathering logs for kubelet ...
	I0407 13:37:10.841288 1213906 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0407 13:37:10.900577 1213906 logs.go:123] Gathering logs for dmesg ...
	I0407 13:37:10.900631 1213906 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0407 13:37:10.922412 1213906 logs.go:123] Gathering logs for describe nodes ...
	I0407 13:37:10.922465 1213906 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0407 13:37:11.030435 1213906 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0407 13:37:11.030468 1213906 logs.go:123] Gathering logs for CRI-O ...
	I0407 13:37:11.030488 1213906 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0407 13:37:13.651617 1213906 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0407 13:37:13.667747 1213906 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0407 13:37:13.667847 1213906 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0407 13:37:13.725754 1213906 cri.go:89] found id: ""
	I0407 13:37:13.725792 1213906 logs.go:282] 0 containers: []
	W0407 13:37:13.725803 1213906 logs.go:284] No container was found matching "kube-apiserver"
	I0407 13:37:13.725812 1213906 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0407 13:37:13.725888 1213906 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0407 13:37:13.773941 1213906 cri.go:89] found id: ""
	I0407 13:37:13.773974 1213906 logs.go:282] 0 containers: []
	W0407 13:37:13.773985 1213906 logs.go:284] No container was found matching "etcd"
	I0407 13:37:13.773994 1213906 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0407 13:37:13.774064 1213906 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0407 13:37:13.812437 1213906 cri.go:89] found id: ""
	I0407 13:37:13.812472 1213906 logs.go:282] 0 containers: []
	W0407 13:37:13.812483 1213906 logs.go:284] No container was found matching "coredns"
	I0407 13:37:13.812492 1213906 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0407 13:37:13.812584 1213906 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0407 13:37:13.854200 1213906 cri.go:89] found id: ""
	I0407 13:37:13.854233 1213906 logs.go:282] 0 containers: []
	W0407 13:37:13.854241 1213906 logs.go:284] No container was found matching "kube-scheduler"
	I0407 13:37:13.854247 1213906 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0407 13:37:13.854317 1213906 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0407 13:37:13.893644 1213906 cri.go:89] found id: ""
	I0407 13:37:13.893681 1213906 logs.go:282] 0 containers: []
	W0407 13:37:13.893692 1213906 logs.go:284] No container was found matching "kube-proxy"
	I0407 13:37:13.893700 1213906 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0407 13:37:13.893794 1213906 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0407 13:37:13.941071 1213906 cri.go:89] found id: ""
	I0407 13:37:13.941103 1213906 logs.go:282] 0 containers: []
	W0407 13:37:13.941115 1213906 logs.go:284] No container was found matching "kube-controller-manager"
	I0407 13:37:13.941124 1213906 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0407 13:37:13.941194 1213906 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0407 13:37:13.982311 1213906 cri.go:89] found id: ""
	I0407 13:37:13.982343 1213906 logs.go:282] 0 containers: []
	W0407 13:37:13.982355 1213906 logs.go:284] No container was found matching "kindnet"
	I0407 13:37:13.982363 1213906 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0407 13:37:13.982436 1213906 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0407 13:37:14.041624 1213906 cri.go:89] found id: ""
	I0407 13:37:14.041660 1213906 logs.go:282] 0 containers: []
	W0407 13:37:14.041671 1213906 logs.go:284] No container was found matching "kubernetes-dashboard"
	I0407 13:37:14.041686 1213906 logs.go:123] Gathering logs for dmesg ...
	I0407 13:37:14.041720 1213906 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0407 13:37:14.058790 1213906 logs.go:123] Gathering logs for describe nodes ...
	I0407 13:37:14.058837 1213906 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0407 13:37:14.170389 1213906 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0407 13:37:14.170418 1213906 logs.go:123] Gathering logs for CRI-O ...
	I0407 13:37:14.170438 1213906 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0407 13:37:14.263570 1213906 logs.go:123] Gathering logs for container status ...
	I0407 13:37:14.263621 1213906 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0407 13:37:14.325210 1213906 logs.go:123] Gathering logs for kubelet ...
	I0407 13:37:14.325274 1213906 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0407 13:37:16.905885 1213906 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0407 13:37:16.923090 1213906 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0407 13:37:16.923193 1213906 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0407 13:37:16.962306 1213906 cri.go:89] found id: ""
	I0407 13:37:16.962342 1213906 logs.go:282] 0 containers: []
	W0407 13:37:16.962354 1213906 logs.go:284] No container was found matching "kube-apiserver"
	I0407 13:37:16.962363 1213906 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0407 13:37:16.962435 1213906 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0407 13:37:17.006446 1213906 cri.go:89] found id: ""
	I0407 13:37:17.006479 1213906 logs.go:282] 0 containers: []
	W0407 13:37:17.006487 1213906 logs.go:284] No container was found matching "etcd"
	I0407 13:37:17.006515 1213906 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0407 13:37:17.006584 1213906 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0407 13:37:17.054574 1213906 cri.go:89] found id: ""
	I0407 13:37:17.054615 1213906 logs.go:282] 0 containers: []
	W0407 13:37:17.054628 1213906 logs.go:284] No container was found matching "coredns"
	I0407 13:37:17.054637 1213906 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0407 13:37:17.054703 1213906 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0407 13:37:17.099544 1213906 cri.go:89] found id: ""
	I0407 13:37:17.099584 1213906 logs.go:282] 0 containers: []
	W0407 13:37:17.099596 1213906 logs.go:284] No container was found matching "kube-scheduler"
	I0407 13:37:17.099603 1213906 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0407 13:37:17.099672 1213906 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0407 13:37:17.143376 1213906 cri.go:89] found id: ""
	I0407 13:37:17.143418 1213906 logs.go:282] 0 containers: []
	W0407 13:37:17.143430 1213906 logs.go:284] No container was found matching "kube-proxy"
	I0407 13:37:17.143438 1213906 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0407 13:37:17.143516 1213906 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0407 13:37:17.180986 1213906 cri.go:89] found id: ""
	I0407 13:37:17.181024 1213906 logs.go:282] 0 containers: []
	W0407 13:37:17.181042 1213906 logs.go:284] No container was found matching "kube-controller-manager"
	I0407 13:37:17.181062 1213906 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0407 13:37:17.181135 1213906 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0407 13:37:17.223467 1213906 cri.go:89] found id: ""
	I0407 13:37:17.223502 1213906 logs.go:282] 0 containers: []
	W0407 13:37:17.223514 1213906 logs.go:284] No container was found matching "kindnet"
	I0407 13:37:17.223534 1213906 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0407 13:37:17.223624 1213906 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0407 13:37:17.265924 1213906 cri.go:89] found id: ""
	I0407 13:37:17.265959 1213906 logs.go:282] 0 containers: []
	W0407 13:37:17.265972 1213906 logs.go:284] No container was found matching "kubernetes-dashboard"
	I0407 13:37:17.265988 1213906 logs.go:123] Gathering logs for describe nodes ...
	I0407 13:37:17.266004 1213906 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0407 13:37:17.367849 1213906 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0407 13:37:17.367874 1213906 logs.go:123] Gathering logs for CRI-O ...
	I0407 13:37:17.367890 1213906 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0407 13:37:17.450182 1213906 logs.go:123] Gathering logs for container status ...
	I0407 13:37:17.450237 1213906 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0407 13:37:17.492548 1213906 logs.go:123] Gathering logs for kubelet ...
	I0407 13:37:17.492588 1213906 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0407 13:37:17.546556 1213906 logs.go:123] Gathering logs for dmesg ...
	I0407 13:37:17.546599 1213906 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0407 13:37:20.061901 1213906 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0407 13:37:20.076717 1213906 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0407 13:37:20.076849 1213906 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0407 13:37:20.118717 1213906 cri.go:89] found id: ""
	I0407 13:37:20.118750 1213906 logs.go:282] 0 containers: []
	W0407 13:37:20.118758 1213906 logs.go:284] No container was found matching "kube-apiserver"
	I0407 13:37:20.118764 1213906 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0407 13:37:20.118827 1213906 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0407 13:37:20.158853 1213906 cri.go:89] found id: ""
	I0407 13:37:20.158890 1213906 logs.go:282] 0 containers: []
	W0407 13:37:20.158902 1213906 logs.go:284] No container was found matching "etcd"
	I0407 13:37:20.158911 1213906 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0407 13:37:20.158976 1213906 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0407 13:37:20.200034 1213906 cri.go:89] found id: ""
	I0407 13:37:20.200072 1213906 logs.go:282] 0 containers: []
	W0407 13:37:20.200083 1213906 logs.go:284] No container was found matching "coredns"
	I0407 13:37:20.200091 1213906 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0407 13:37:20.200167 1213906 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0407 13:37:20.241170 1213906 cri.go:89] found id: ""
	I0407 13:37:20.241210 1213906 logs.go:282] 0 containers: []
	W0407 13:37:20.241223 1213906 logs.go:284] No container was found matching "kube-scheduler"
	I0407 13:37:20.241231 1213906 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0407 13:37:20.241307 1213906 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0407 13:37:20.286868 1213906 cri.go:89] found id: ""
	I0407 13:37:20.286905 1213906 logs.go:282] 0 containers: []
	W0407 13:37:20.286916 1213906 logs.go:284] No container was found matching "kube-proxy"
	I0407 13:37:20.286925 1213906 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0407 13:37:20.287008 1213906 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0407 13:37:20.324372 1213906 cri.go:89] found id: ""
	I0407 13:37:20.324405 1213906 logs.go:282] 0 containers: []
	W0407 13:37:20.324416 1213906 logs.go:284] No container was found matching "kube-controller-manager"
	I0407 13:37:20.324424 1213906 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0407 13:37:20.324500 1213906 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0407 13:37:20.369602 1213906 cri.go:89] found id: ""
	I0407 13:37:20.369637 1213906 logs.go:282] 0 containers: []
	W0407 13:37:20.369649 1213906 logs.go:284] No container was found matching "kindnet"
	I0407 13:37:20.369658 1213906 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0407 13:37:20.369770 1213906 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0407 13:37:20.413544 1213906 cri.go:89] found id: ""
	I0407 13:37:20.413579 1213906 logs.go:282] 0 containers: []
	W0407 13:37:20.413591 1213906 logs.go:284] No container was found matching "kubernetes-dashboard"
	I0407 13:37:20.413609 1213906 logs.go:123] Gathering logs for container status ...
	I0407 13:37:20.413632 1213906 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0407 13:37:20.463601 1213906 logs.go:123] Gathering logs for kubelet ...
	I0407 13:37:20.463638 1213906 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0407 13:37:20.514515 1213906 logs.go:123] Gathering logs for dmesg ...
	I0407 13:37:20.514561 1213906 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0407 13:37:20.528625 1213906 logs.go:123] Gathering logs for describe nodes ...
	I0407 13:37:20.528668 1213906 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0407 13:37:20.595945 1213906 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0407 13:37:20.595981 1213906 logs.go:123] Gathering logs for CRI-O ...
	I0407 13:37:20.596000 1213906 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0407 13:37:23.181624 1213906 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0407 13:37:23.200771 1213906 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0407 13:37:23.200866 1213906 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0407 13:37:23.243045 1213906 cri.go:89] found id: ""
	I0407 13:37:23.243074 1213906 logs.go:282] 0 containers: []
	W0407 13:37:23.243083 1213906 logs.go:284] No container was found matching "kube-apiserver"
	I0407 13:37:23.243091 1213906 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0407 13:37:23.243161 1213906 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0407 13:37:23.287542 1213906 cri.go:89] found id: ""
	I0407 13:37:23.287584 1213906 logs.go:282] 0 containers: []
	W0407 13:37:23.287597 1213906 logs.go:284] No container was found matching "etcd"
	I0407 13:37:23.287606 1213906 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0407 13:37:23.287683 1213906 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0407 13:37:23.332821 1213906 cri.go:89] found id: ""
	I0407 13:37:23.332858 1213906 logs.go:282] 0 containers: []
	W0407 13:37:23.332869 1213906 logs.go:284] No container was found matching "coredns"
	I0407 13:37:23.332877 1213906 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0407 13:37:23.332959 1213906 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0407 13:37:23.376171 1213906 cri.go:89] found id: ""
	I0407 13:37:23.376401 1213906 logs.go:282] 0 containers: []
	W0407 13:37:23.376436 1213906 logs.go:284] No container was found matching "kube-scheduler"
	I0407 13:37:23.376458 1213906 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0407 13:37:23.376544 1213906 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0407 13:37:23.423337 1213906 cri.go:89] found id: ""
	I0407 13:37:23.423373 1213906 logs.go:282] 0 containers: []
	W0407 13:37:23.423385 1213906 logs.go:284] No container was found matching "kube-proxy"
	I0407 13:37:23.423394 1213906 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0407 13:37:23.423467 1213906 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0407 13:37:23.471925 1213906 cri.go:89] found id: ""
	I0407 13:37:23.471967 1213906 logs.go:282] 0 containers: []
	W0407 13:37:23.471981 1213906 logs.go:284] No container was found matching "kube-controller-manager"
	I0407 13:37:23.471998 1213906 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0407 13:37:23.472078 1213906 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0407 13:37:23.513983 1213906 cri.go:89] found id: ""
	I0407 13:37:23.514021 1213906 logs.go:282] 0 containers: []
	W0407 13:37:23.514032 1213906 logs.go:284] No container was found matching "kindnet"
	I0407 13:37:23.514040 1213906 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0407 13:37:23.514106 1213906 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0407 13:37:23.563185 1213906 cri.go:89] found id: ""
	I0407 13:37:23.563225 1213906 logs.go:282] 0 containers: []
	W0407 13:37:23.563237 1213906 logs.go:284] No container was found matching "kubernetes-dashboard"
	I0407 13:37:23.563252 1213906 logs.go:123] Gathering logs for container status ...
	I0407 13:37:23.563277 1213906 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0407 13:37:23.614420 1213906 logs.go:123] Gathering logs for kubelet ...
	I0407 13:37:23.614461 1213906 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0407 13:37:23.692604 1213906 logs.go:123] Gathering logs for dmesg ...
	I0407 13:37:23.692666 1213906 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0407 13:37:23.708571 1213906 logs.go:123] Gathering logs for describe nodes ...
	I0407 13:37:23.708618 1213906 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0407 13:37:23.812693 1213906 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0407 13:37:23.812722 1213906 logs.go:123] Gathering logs for CRI-O ...
	I0407 13:37:23.812739 1213906 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0407 13:37:26.436764 1213906 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0407 13:37:26.454368 1213906 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0407 13:37:26.454460 1213906 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0407 13:37:26.497115 1213906 cri.go:89] found id: ""
	I0407 13:37:26.497154 1213906 logs.go:282] 0 containers: []
	W0407 13:37:26.497173 1213906 logs.go:284] No container was found matching "kube-apiserver"
	I0407 13:37:26.497183 1213906 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0407 13:37:26.497264 1213906 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0407 13:37:26.541609 1213906 cri.go:89] found id: ""
	I0407 13:37:26.541649 1213906 logs.go:282] 0 containers: []
	W0407 13:37:26.541662 1213906 logs.go:284] No container was found matching "etcd"
	I0407 13:37:26.541671 1213906 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0407 13:37:26.541765 1213906 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0407 13:37:26.590546 1213906 cri.go:89] found id: ""
	I0407 13:37:26.590592 1213906 logs.go:282] 0 containers: []
	W0407 13:37:26.590612 1213906 logs.go:284] No container was found matching "coredns"
	I0407 13:37:26.590621 1213906 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0407 13:37:26.590693 1213906 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0407 13:37:26.630833 1213906 cri.go:89] found id: ""
	I0407 13:37:26.630874 1213906 logs.go:282] 0 containers: []
	W0407 13:37:26.630887 1213906 logs.go:284] No container was found matching "kube-scheduler"
	I0407 13:37:26.630895 1213906 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0407 13:37:26.630976 1213906 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0407 13:37:26.671897 1213906 cri.go:89] found id: ""
	I0407 13:37:26.671947 1213906 logs.go:282] 0 containers: []
	W0407 13:37:26.671965 1213906 logs.go:284] No container was found matching "kube-proxy"
	I0407 13:37:26.671976 1213906 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0407 13:37:26.672064 1213906 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0407 13:37:26.711474 1213906 cri.go:89] found id: ""
	I0407 13:37:26.711504 1213906 logs.go:282] 0 containers: []
	W0407 13:37:26.711513 1213906 logs.go:284] No container was found matching "kube-controller-manager"
	I0407 13:37:26.711520 1213906 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0407 13:37:26.711575 1213906 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0407 13:37:26.753676 1213906 cri.go:89] found id: ""
	I0407 13:37:26.753740 1213906 logs.go:282] 0 containers: []
	W0407 13:37:26.753754 1213906 logs.go:284] No container was found matching "kindnet"
	I0407 13:37:26.753762 1213906 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0407 13:37:26.753822 1213906 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0407 13:37:26.788827 1213906 cri.go:89] found id: ""
	I0407 13:37:26.788863 1213906 logs.go:282] 0 containers: []
	W0407 13:37:26.788872 1213906 logs.go:284] No container was found matching "kubernetes-dashboard"
	I0407 13:37:26.788883 1213906 logs.go:123] Gathering logs for describe nodes ...
	I0407 13:37:26.788898 1213906 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0407 13:37:26.862394 1213906 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0407 13:37:26.862426 1213906 logs.go:123] Gathering logs for CRI-O ...
	I0407 13:37:26.862444 1213906 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0407 13:37:26.953840 1213906 logs.go:123] Gathering logs for container status ...
	I0407 13:37:26.953906 1213906 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0407 13:37:26.997698 1213906 logs.go:123] Gathering logs for kubelet ...
	I0407 13:37:26.997805 1213906 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0407 13:37:27.049335 1213906 logs.go:123] Gathering logs for dmesg ...
	I0407 13:37:27.049384 1213906 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0407 13:37:29.563383 1213906 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0407 13:37:29.577622 1213906 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0407 13:37:29.577742 1213906 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0407 13:37:29.618387 1213906 cri.go:89] found id: ""
	I0407 13:37:29.618429 1213906 logs.go:282] 0 containers: []
	W0407 13:37:29.618441 1213906 logs.go:284] No container was found matching "kube-apiserver"
	I0407 13:37:29.618451 1213906 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0407 13:37:29.618525 1213906 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0407 13:37:29.658565 1213906 cri.go:89] found id: ""
	I0407 13:37:29.658607 1213906 logs.go:282] 0 containers: []
	W0407 13:37:29.658620 1213906 logs.go:284] No container was found matching "etcd"
	I0407 13:37:29.658629 1213906 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0407 13:37:29.658707 1213906 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0407 13:37:29.702345 1213906 cri.go:89] found id: ""
	I0407 13:37:29.702385 1213906 logs.go:282] 0 containers: []
	W0407 13:37:29.702397 1213906 logs.go:284] No container was found matching "coredns"
	I0407 13:37:29.702406 1213906 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0407 13:37:29.702478 1213906 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0407 13:37:29.744154 1213906 cri.go:89] found id: ""
	I0407 13:37:29.744192 1213906 logs.go:282] 0 containers: []
	W0407 13:37:29.744203 1213906 logs.go:284] No container was found matching "kube-scheduler"
	I0407 13:37:29.744210 1213906 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0407 13:37:29.744276 1213906 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0407 13:37:29.783407 1213906 cri.go:89] found id: ""
	I0407 13:37:29.783440 1213906 logs.go:282] 0 containers: []
	W0407 13:37:29.783451 1213906 logs.go:284] No container was found matching "kube-proxy"
	I0407 13:37:29.783459 1213906 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0407 13:37:29.783540 1213906 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0407 13:37:29.822056 1213906 cri.go:89] found id: ""
	I0407 13:37:29.822093 1213906 logs.go:282] 0 containers: []
	W0407 13:37:29.822105 1213906 logs.go:284] No container was found matching "kube-controller-manager"
	I0407 13:37:29.822114 1213906 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0407 13:37:29.822406 1213906 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0407 13:37:29.862776 1213906 cri.go:89] found id: ""
	I0407 13:37:29.862826 1213906 logs.go:282] 0 containers: []
	W0407 13:37:29.862839 1213906 logs.go:284] No container was found matching "kindnet"
	I0407 13:37:29.862848 1213906 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0407 13:37:29.862928 1213906 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0407 13:37:29.899634 1213906 cri.go:89] found id: ""
	I0407 13:37:29.899670 1213906 logs.go:282] 0 containers: []
	W0407 13:37:29.899678 1213906 logs.go:284] No container was found matching "kubernetes-dashboard"
	I0407 13:37:29.899691 1213906 logs.go:123] Gathering logs for dmesg ...
	I0407 13:37:29.899704 1213906 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0407 13:37:29.916025 1213906 logs.go:123] Gathering logs for describe nodes ...
	I0407 13:37:29.916077 1213906 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0407 13:37:29.996329 1213906 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0407 13:37:29.996365 1213906 logs.go:123] Gathering logs for CRI-O ...
	I0407 13:37:29.996385 1213906 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0407 13:37:30.080827 1213906 logs.go:123] Gathering logs for container status ...
	I0407 13:37:30.080877 1213906 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0407 13:37:30.125545 1213906 logs.go:123] Gathering logs for kubelet ...
	I0407 13:37:30.125583 1213906 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0407 13:37:32.679967 1213906 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0407 13:37:32.693352 1213906 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0407 13:37:32.693424 1213906 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0407 13:37:32.727931 1213906 cri.go:89] found id: ""
	I0407 13:37:32.727969 1213906 logs.go:282] 0 containers: []
	W0407 13:37:32.727983 1213906 logs.go:284] No container was found matching "kube-apiserver"
	I0407 13:37:32.727992 1213906 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0407 13:37:32.728061 1213906 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0407 13:37:32.762336 1213906 cri.go:89] found id: ""
	I0407 13:37:32.762371 1213906 logs.go:282] 0 containers: []
	W0407 13:37:32.762383 1213906 logs.go:284] No container was found matching "etcd"
	I0407 13:37:32.762394 1213906 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0407 13:37:32.762464 1213906 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0407 13:37:32.801549 1213906 cri.go:89] found id: ""
	I0407 13:37:32.801579 1213906 logs.go:282] 0 containers: []
	W0407 13:37:32.801588 1213906 logs.go:284] No container was found matching "coredns"
	I0407 13:37:32.801595 1213906 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0407 13:37:32.801670 1213906 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0407 13:37:32.839740 1213906 cri.go:89] found id: ""
	I0407 13:37:32.839777 1213906 logs.go:282] 0 containers: []
	W0407 13:37:32.839790 1213906 logs.go:284] No container was found matching "kube-scheduler"
	I0407 13:37:32.839799 1213906 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0407 13:37:32.839870 1213906 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0407 13:37:32.879430 1213906 cri.go:89] found id: ""
	I0407 13:37:32.879471 1213906 logs.go:282] 0 containers: []
	W0407 13:37:32.879483 1213906 logs.go:284] No container was found matching "kube-proxy"
	I0407 13:37:32.879491 1213906 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0407 13:37:32.879571 1213906 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0407 13:37:32.916470 1213906 cri.go:89] found id: ""
	I0407 13:37:32.916512 1213906 logs.go:282] 0 containers: []
	W0407 13:37:32.916523 1213906 logs.go:284] No container was found matching "kube-controller-manager"
	I0407 13:37:32.916530 1213906 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0407 13:37:32.916597 1213906 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0407 13:37:32.950873 1213906 cri.go:89] found id: ""
	I0407 13:37:32.950907 1213906 logs.go:282] 0 containers: []
	W0407 13:37:32.950917 1213906 logs.go:284] No container was found matching "kindnet"
	I0407 13:37:32.950923 1213906 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0407 13:37:32.950986 1213906 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0407 13:37:32.988791 1213906 cri.go:89] found id: ""
	I0407 13:37:32.988829 1213906 logs.go:282] 0 containers: []
	W0407 13:37:32.988842 1213906 logs.go:284] No container was found matching "kubernetes-dashboard"
	I0407 13:37:32.988855 1213906 logs.go:123] Gathering logs for CRI-O ...
	I0407 13:37:32.988874 1213906 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0407 13:37:33.070194 1213906 logs.go:123] Gathering logs for container status ...
	I0407 13:37:33.070242 1213906 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0407 13:37:33.112739 1213906 logs.go:123] Gathering logs for kubelet ...
	I0407 13:37:33.112777 1213906 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0407 13:37:33.167526 1213906 logs.go:123] Gathering logs for dmesg ...
	I0407 13:37:33.167578 1213906 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0407 13:37:33.182487 1213906 logs.go:123] Gathering logs for describe nodes ...
	I0407 13:37:33.182528 1213906 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0407 13:37:33.259094 1213906 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0407 13:37:35.760129 1213906 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0407 13:37:35.778389 1213906 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0407 13:37:35.778473 1213906 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0407 13:37:35.818897 1213906 cri.go:89] found id: ""
	I0407 13:37:35.818935 1213906 logs.go:282] 0 containers: []
	W0407 13:37:35.818946 1213906 logs.go:284] No container was found matching "kube-apiserver"
	I0407 13:37:35.818958 1213906 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0407 13:37:35.819041 1213906 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0407 13:37:35.860856 1213906 cri.go:89] found id: ""
	I0407 13:37:35.860894 1213906 logs.go:282] 0 containers: []
	W0407 13:37:35.860906 1213906 logs.go:284] No container was found matching "etcd"
	I0407 13:37:35.860914 1213906 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0407 13:37:35.861005 1213906 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0407 13:37:35.903334 1213906 cri.go:89] found id: ""
	I0407 13:37:35.903375 1213906 logs.go:282] 0 containers: []
	W0407 13:37:35.903384 1213906 logs.go:284] No container was found matching "coredns"
	I0407 13:37:35.903390 1213906 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0407 13:37:35.903456 1213906 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0407 13:37:35.949094 1213906 cri.go:89] found id: ""
	I0407 13:37:35.949141 1213906 logs.go:282] 0 containers: []
	W0407 13:37:35.949154 1213906 logs.go:284] No container was found matching "kube-scheduler"
	I0407 13:37:35.949162 1213906 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0407 13:37:35.949260 1213906 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0407 13:37:35.997656 1213906 cri.go:89] found id: ""
	I0407 13:37:35.997819 1213906 logs.go:282] 0 containers: []
	W0407 13:37:35.997857 1213906 logs.go:284] No container was found matching "kube-proxy"
	I0407 13:37:35.997881 1213906 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0407 13:37:35.998089 1213906 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0407 13:37:36.044454 1213906 cri.go:89] found id: ""
	I0407 13:37:36.044485 1213906 logs.go:282] 0 containers: []
	W0407 13:37:36.044494 1213906 logs.go:284] No container was found matching "kube-controller-manager"
	I0407 13:37:36.044500 1213906 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0407 13:37:36.044560 1213906 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0407 13:37:36.086640 1213906 cri.go:89] found id: ""
	I0407 13:37:36.086670 1213906 logs.go:282] 0 containers: []
	W0407 13:37:36.086693 1213906 logs.go:284] No container was found matching "kindnet"
	I0407 13:37:36.086699 1213906 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0407 13:37:36.086772 1213906 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0407 13:37:36.133581 1213906 cri.go:89] found id: ""
	I0407 13:37:36.133612 1213906 logs.go:282] 0 containers: []
	W0407 13:37:36.133623 1213906 logs.go:284] No container was found matching "kubernetes-dashboard"
	I0407 13:37:36.133637 1213906 logs.go:123] Gathering logs for kubelet ...
	I0407 13:37:36.133655 1213906 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0407 13:37:36.199664 1213906 logs.go:123] Gathering logs for dmesg ...
	I0407 13:37:36.199738 1213906 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0407 13:37:36.217987 1213906 logs.go:123] Gathering logs for describe nodes ...
	I0407 13:37:36.218085 1213906 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0407 13:37:36.303673 1213906 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0407 13:37:36.303706 1213906 logs.go:123] Gathering logs for CRI-O ...
	I0407 13:37:36.303722 1213906 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0407 13:37:36.384080 1213906 logs.go:123] Gathering logs for container status ...
	I0407 13:37:36.384135 1213906 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0407 13:37:38.936982 1213906 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0407 13:37:38.950147 1213906 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0407 13:37:38.950249 1213906 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0407 13:37:38.990717 1213906 cri.go:89] found id: ""
	I0407 13:37:38.990751 1213906 logs.go:282] 0 containers: []
	W0407 13:37:38.990763 1213906 logs.go:284] No container was found matching "kube-apiserver"
	I0407 13:37:38.990771 1213906 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0407 13:37:38.990845 1213906 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0407 13:37:39.037053 1213906 cri.go:89] found id: ""
	I0407 13:37:39.037087 1213906 logs.go:282] 0 containers: []
	W0407 13:37:39.037096 1213906 logs.go:284] No container was found matching "etcd"
	I0407 13:37:39.037102 1213906 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0407 13:37:39.037173 1213906 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0407 13:37:39.075719 1213906 cri.go:89] found id: ""
	I0407 13:37:39.075755 1213906 logs.go:282] 0 containers: []
	W0407 13:37:39.075768 1213906 logs.go:284] No container was found matching "coredns"
	I0407 13:37:39.075776 1213906 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0407 13:37:39.075873 1213906 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0407 13:37:39.112332 1213906 cri.go:89] found id: ""
	I0407 13:37:39.112367 1213906 logs.go:282] 0 containers: []
	W0407 13:37:39.112379 1213906 logs.go:284] No container was found matching "kube-scheduler"
	I0407 13:37:39.112388 1213906 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0407 13:37:39.112461 1213906 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0407 13:37:39.157617 1213906 cri.go:89] found id: ""
	I0407 13:37:39.157659 1213906 logs.go:282] 0 containers: []
	W0407 13:37:39.157672 1213906 logs.go:284] No container was found matching "kube-proxy"
	I0407 13:37:39.157680 1213906 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0407 13:37:39.157783 1213906 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0407 13:37:39.199857 1213906 cri.go:89] found id: ""
	I0407 13:37:39.199892 1213906 logs.go:282] 0 containers: []
	W0407 13:37:39.199904 1213906 logs.go:284] No container was found matching "kube-controller-manager"
	I0407 13:37:39.199913 1213906 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0407 13:37:39.199993 1213906 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0407 13:37:39.248033 1213906 cri.go:89] found id: ""
	I0407 13:37:39.248065 1213906 logs.go:282] 0 containers: []
	W0407 13:37:39.248074 1213906 logs.go:284] No container was found matching "kindnet"
	I0407 13:37:39.248079 1213906 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0407 13:37:39.248141 1213906 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0407 13:37:39.286491 1213906 cri.go:89] found id: ""
	I0407 13:37:39.286536 1213906 logs.go:282] 0 containers: []
	W0407 13:37:39.286551 1213906 logs.go:284] No container was found matching "kubernetes-dashboard"
	I0407 13:37:39.286567 1213906 logs.go:123] Gathering logs for kubelet ...
	I0407 13:37:39.286584 1213906 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0407 13:37:39.342759 1213906 logs.go:123] Gathering logs for dmesg ...
	I0407 13:37:39.342811 1213906 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0407 13:37:39.363068 1213906 logs.go:123] Gathering logs for describe nodes ...
	I0407 13:37:39.363117 1213906 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0407 13:37:39.443647 1213906 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0407 13:37:39.443677 1213906 logs.go:123] Gathering logs for CRI-O ...
	I0407 13:37:39.443693 1213906 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0407 13:37:39.527201 1213906 logs.go:123] Gathering logs for container status ...
	I0407 13:37:39.527284 1213906 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0407 13:37:42.075946 1213906 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0407 13:37:42.092209 1213906 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0407 13:37:42.092293 1213906 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0407 13:37:42.131878 1213906 cri.go:89] found id: ""
	I0407 13:37:42.131914 1213906 logs.go:282] 0 containers: []
	W0407 13:37:42.131925 1213906 logs.go:284] No container was found matching "kube-apiserver"
	I0407 13:37:42.131936 1213906 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0407 13:37:42.132010 1213906 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0407 13:37:42.179756 1213906 cri.go:89] found id: ""
	I0407 13:37:42.179787 1213906 logs.go:282] 0 containers: []
	W0407 13:37:42.179800 1213906 logs.go:284] No container was found matching "etcd"
	I0407 13:37:42.179807 1213906 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0407 13:37:42.179864 1213906 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0407 13:37:42.226138 1213906 cri.go:89] found id: ""
	I0407 13:37:42.226274 1213906 logs.go:282] 0 containers: []
	W0407 13:37:42.226295 1213906 logs.go:284] No container was found matching "coredns"
	I0407 13:37:42.226306 1213906 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0407 13:37:42.226381 1213906 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0407 13:37:42.271143 1213906 cri.go:89] found id: ""
	I0407 13:37:42.271179 1213906 logs.go:282] 0 containers: []
	W0407 13:37:42.271188 1213906 logs.go:284] No container was found matching "kube-scheduler"
	I0407 13:37:42.271196 1213906 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0407 13:37:42.271277 1213906 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0407 13:37:42.322045 1213906 cri.go:89] found id: ""
	I0407 13:37:42.322079 1213906 logs.go:282] 0 containers: []
	W0407 13:37:42.322091 1213906 logs.go:284] No container was found matching "kube-proxy"
	I0407 13:37:42.322098 1213906 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0407 13:37:42.322178 1213906 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0407 13:37:42.387887 1213906 cri.go:89] found id: ""
	I0407 13:37:42.387930 1213906 logs.go:282] 0 containers: []
	W0407 13:37:42.387944 1213906 logs.go:284] No container was found matching "kube-controller-manager"
	I0407 13:37:42.387952 1213906 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0407 13:37:42.388068 1213906 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0407 13:37:42.455080 1213906 cri.go:89] found id: ""
	I0407 13:37:42.455118 1213906 logs.go:282] 0 containers: []
	W0407 13:37:42.455130 1213906 logs.go:284] No container was found matching "kindnet"
	I0407 13:37:42.455144 1213906 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0407 13:37:42.455222 1213906 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0407 13:37:42.513197 1213906 cri.go:89] found id: ""
	I0407 13:37:42.513237 1213906 logs.go:282] 0 containers: []
	W0407 13:37:42.513253 1213906 logs.go:284] No container was found matching "kubernetes-dashboard"
	I0407 13:37:42.513265 1213906 logs.go:123] Gathering logs for dmesg ...
	I0407 13:37:42.513280 1213906 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0407 13:37:42.530518 1213906 logs.go:123] Gathering logs for describe nodes ...
	I0407 13:37:42.530557 1213906 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0407 13:37:42.614352 1213906 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0407 13:37:42.614386 1213906 logs.go:123] Gathering logs for CRI-O ...
	I0407 13:37:42.614405 1213906 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0407 13:37:42.704469 1213906 logs.go:123] Gathering logs for container status ...
	I0407 13:37:42.704525 1213906 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0407 13:37:42.757169 1213906 logs.go:123] Gathering logs for kubelet ...
	I0407 13:37:42.757228 1213906 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0407 13:37:45.320130 1213906 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0407 13:37:45.337324 1213906 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0407 13:37:45.337398 1213906 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0407 13:37:45.381574 1213906 cri.go:89] found id: ""
	I0407 13:37:45.381611 1213906 logs.go:282] 0 containers: []
	W0407 13:37:45.381620 1213906 logs.go:284] No container was found matching "kube-apiserver"
	I0407 13:37:45.381627 1213906 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0407 13:37:45.381694 1213906 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0407 13:37:45.427449 1213906 cri.go:89] found id: ""
	I0407 13:37:45.427493 1213906 logs.go:282] 0 containers: []
	W0407 13:37:45.427506 1213906 logs.go:284] No container was found matching "etcd"
	I0407 13:37:45.427514 1213906 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0407 13:37:45.427605 1213906 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0407 13:37:45.470477 1213906 cri.go:89] found id: ""
	I0407 13:37:45.470519 1213906 logs.go:282] 0 containers: []
	W0407 13:37:45.470531 1213906 logs.go:284] No container was found matching "coredns"
	I0407 13:37:45.470542 1213906 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0407 13:37:45.470630 1213906 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0407 13:37:45.514948 1213906 cri.go:89] found id: ""
	I0407 13:37:45.514985 1213906 logs.go:282] 0 containers: []
	W0407 13:37:45.514998 1213906 logs.go:284] No container was found matching "kube-scheduler"
	I0407 13:37:45.515007 1213906 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0407 13:37:45.515109 1213906 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0407 13:37:45.557193 1213906 cri.go:89] found id: ""
	I0407 13:37:45.557226 1213906 logs.go:282] 0 containers: []
	W0407 13:37:45.557237 1213906 logs.go:284] No container was found matching "kube-proxy"
	I0407 13:37:45.557247 1213906 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0407 13:37:45.557320 1213906 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0407 13:37:45.596880 1213906 cri.go:89] found id: ""
	I0407 13:37:45.596908 1213906 logs.go:282] 0 containers: []
	W0407 13:37:45.596917 1213906 logs.go:284] No container was found matching "kube-controller-manager"
	I0407 13:37:45.596923 1213906 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0407 13:37:45.596981 1213906 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0407 13:37:45.638628 1213906 cri.go:89] found id: ""
	I0407 13:37:45.638689 1213906 logs.go:282] 0 containers: []
	W0407 13:37:45.638704 1213906 logs.go:284] No container was found matching "kindnet"
	I0407 13:37:45.638714 1213906 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0407 13:37:45.638812 1213906 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0407 13:37:45.680539 1213906 cri.go:89] found id: ""
	I0407 13:37:45.680582 1213906 logs.go:282] 0 containers: []
	W0407 13:37:45.680593 1213906 logs.go:284] No container was found matching "kubernetes-dashboard"
	I0407 13:37:45.680608 1213906 logs.go:123] Gathering logs for container status ...
	I0407 13:37:45.680625 1213906 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0407 13:37:45.725120 1213906 logs.go:123] Gathering logs for kubelet ...
	I0407 13:37:45.725169 1213906 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0407 13:37:45.784411 1213906 logs.go:123] Gathering logs for dmesg ...
	I0407 13:37:45.784465 1213906 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0407 13:37:45.801563 1213906 logs.go:123] Gathering logs for describe nodes ...
	I0407 13:37:45.801616 1213906 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0407 13:37:45.873876 1213906 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0407 13:37:45.873905 1213906 logs.go:123] Gathering logs for CRI-O ...
	I0407 13:37:45.873923 1213906 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0407 13:37:48.458984 1213906 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0407 13:37:48.475079 1213906 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0407 13:37:48.475195 1213906 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0407 13:37:48.521434 1213906 cri.go:89] found id: ""
	I0407 13:37:48.521474 1213906 logs.go:282] 0 containers: []
	W0407 13:37:48.521486 1213906 logs.go:284] No container was found matching "kube-apiserver"
	I0407 13:37:48.521495 1213906 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0407 13:37:48.521562 1213906 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0407 13:37:48.566160 1213906 cri.go:89] found id: ""
	I0407 13:37:48.566201 1213906 logs.go:282] 0 containers: []
	W0407 13:37:48.566213 1213906 logs.go:284] No container was found matching "etcd"
	I0407 13:37:48.566222 1213906 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0407 13:37:48.566306 1213906 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0407 13:37:48.604858 1213906 cri.go:89] found id: ""
	I0407 13:37:48.604900 1213906 logs.go:282] 0 containers: []
	W0407 13:37:48.604912 1213906 logs.go:284] No container was found matching "coredns"
	I0407 13:37:48.604922 1213906 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0407 13:37:48.605008 1213906 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0407 13:37:48.645417 1213906 cri.go:89] found id: ""
	I0407 13:37:48.645452 1213906 logs.go:282] 0 containers: []
	W0407 13:37:48.645461 1213906 logs.go:284] No container was found matching "kube-scheduler"
	I0407 13:37:48.645468 1213906 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0407 13:37:48.645542 1213906 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0407 13:37:48.683076 1213906 cri.go:89] found id: ""
	I0407 13:37:48.683120 1213906 logs.go:282] 0 containers: []
	W0407 13:37:48.683132 1213906 logs.go:284] No container was found matching "kube-proxy"
	I0407 13:37:48.683141 1213906 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0407 13:37:48.683243 1213906 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0407 13:37:48.723847 1213906 cri.go:89] found id: ""
	I0407 13:37:48.723879 1213906 logs.go:282] 0 containers: []
	W0407 13:37:48.723888 1213906 logs.go:284] No container was found matching "kube-controller-manager"
	I0407 13:37:48.723894 1213906 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0407 13:37:48.723951 1213906 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0407 13:37:48.769377 1213906 cri.go:89] found id: ""
	I0407 13:37:48.769417 1213906 logs.go:282] 0 containers: []
	W0407 13:37:48.769428 1213906 logs.go:284] No container was found matching "kindnet"
	I0407 13:37:48.769437 1213906 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0407 13:37:48.769514 1213906 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0407 13:37:48.807717 1213906 cri.go:89] found id: ""
	I0407 13:37:48.807745 1213906 logs.go:282] 0 containers: []
	W0407 13:37:48.807753 1213906 logs.go:284] No container was found matching "kubernetes-dashboard"
	I0407 13:37:48.807762 1213906 logs.go:123] Gathering logs for dmesg ...
	I0407 13:37:48.807775 1213906 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0407 13:37:48.822202 1213906 logs.go:123] Gathering logs for describe nodes ...
	I0407 13:37:48.822246 1213906 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0407 13:37:48.897398 1213906 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0407 13:37:48.897430 1213906 logs.go:123] Gathering logs for CRI-O ...
	I0407 13:37:48.897451 1213906 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0407 13:37:48.997943 1213906 logs.go:123] Gathering logs for container status ...
	I0407 13:37:48.997998 1213906 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0407 13:37:49.064518 1213906 logs.go:123] Gathering logs for kubelet ...
	I0407 13:37:49.064574 1213906 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0407 13:37:51.646714 1213906 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0407 13:37:51.661027 1213906 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0407 13:37:51.661116 1213906 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0407 13:37:51.696684 1213906 cri.go:89] found id: ""
	I0407 13:37:51.696722 1213906 logs.go:282] 0 containers: []
	W0407 13:37:51.696733 1213906 logs.go:284] No container was found matching "kube-apiserver"
	I0407 13:37:51.696741 1213906 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0407 13:37:51.696826 1213906 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0407 13:37:51.735993 1213906 cri.go:89] found id: ""
	I0407 13:37:51.736024 1213906 logs.go:282] 0 containers: []
	W0407 13:37:51.736034 1213906 logs.go:284] No container was found matching "etcd"
	I0407 13:37:51.736042 1213906 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0407 13:37:51.736110 1213906 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0407 13:37:51.780664 1213906 cri.go:89] found id: ""
	I0407 13:37:51.780698 1213906 logs.go:282] 0 containers: []
	W0407 13:37:51.780709 1213906 logs.go:284] No container was found matching "coredns"
	I0407 13:37:51.780717 1213906 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0407 13:37:51.780795 1213906 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0407 13:37:51.828460 1213906 cri.go:89] found id: ""
	I0407 13:37:51.828546 1213906 logs.go:282] 0 containers: []
	W0407 13:37:51.828563 1213906 logs.go:284] No container was found matching "kube-scheduler"
	I0407 13:37:51.828573 1213906 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0407 13:37:51.828657 1213906 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0407 13:37:51.876224 1213906 cri.go:89] found id: ""
	I0407 13:37:51.876345 1213906 logs.go:282] 0 containers: []
	W0407 13:37:51.876362 1213906 logs.go:284] No container was found matching "kube-proxy"
	I0407 13:37:51.876370 1213906 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0407 13:37:51.876451 1213906 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0407 13:37:51.920035 1213906 cri.go:89] found id: ""
	I0407 13:37:51.920071 1213906 logs.go:282] 0 containers: []
	W0407 13:37:51.920083 1213906 logs.go:284] No container was found matching "kube-controller-manager"
	I0407 13:37:51.920091 1213906 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0407 13:37:51.920155 1213906 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0407 13:37:51.960724 1213906 cri.go:89] found id: ""
	I0407 13:37:51.960766 1213906 logs.go:282] 0 containers: []
	W0407 13:37:51.960776 1213906 logs.go:284] No container was found matching "kindnet"
	I0407 13:37:51.960788 1213906 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0407 13:37:51.960894 1213906 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0407 13:37:52.000461 1213906 cri.go:89] found id: ""
	I0407 13:37:52.000500 1213906 logs.go:282] 0 containers: []
	W0407 13:37:52.000513 1213906 logs.go:284] No container was found matching "kubernetes-dashboard"
	I0407 13:37:52.000527 1213906 logs.go:123] Gathering logs for kubelet ...
	I0407 13:37:52.000544 1213906 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0407 13:37:52.056347 1213906 logs.go:123] Gathering logs for dmesg ...
	I0407 13:37:52.056389 1213906 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0407 13:37:52.071090 1213906 logs.go:123] Gathering logs for describe nodes ...
	I0407 13:37:52.071132 1213906 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0407 13:37:52.152900 1213906 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0407 13:37:52.152933 1213906 logs.go:123] Gathering logs for CRI-O ...
	I0407 13:37:52.152950 1213906 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0407 13:37:52.261721 1213906 logs.go:123] Gathering logs for container status ...
	I0407 13:37:52.261771 1213906 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0407 13:37:54.814662 1213906 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0407 13:37:54.835951 1213906 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0407 13:37:54.836049 1213906 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0407 13:37:54.893315 1213906 cri.go:89] found id: ""
	I0407 13:37:54.893345 1213906 logs.go:282] 0 containers: []
	W0407 13:37:54.893358 1213906 logs.go:284] No container was found matching "kube-apiserver"
	I0407 13:37:54.893367 1213906 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0407 13:37:54.893441 1213906 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0407 13:37:54.945660 1213906 cri.go:89] found id: ""
	I0407 13:37:54.945741 1213906 logs.go:282] 0 containers: []
	W0407 13:37:54.945757 1213906 logs.go:284] No container was found matching "etcd"
	I0407 13:37:54.945766 1213906 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0407 13:37:54.945842 1213906 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0407 13:37:55.004951 1213906 cri.go:89] found id: ""
	I0407 13:37:55.004992 1213906 logs.go:282] 0 containers: []
	W0407 13:37:55.005008 1213906 logs.go:284] No container was found matching "coredns"
	I0407 13:37:55.005025 1213906 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0407 13:37:55.005103 1213906 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0407 13:37:55.055822 1213906 cri.go:89] found id: ""
	I0407 13:37:55.055862 1213906 logs.go:282] 0 containers: []
	W0407 13:37:55.055873 1213906 logs.go:284] No container was found matching "kube-scheduler"
	I0407 13:37:55.055883 1213906 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0407 13:37:55.055953 1213906 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0407 13:37:55.105025 1213906 cri.go:89] found id: ""
	I0407 13:37:55.105056 1213906 logs.go:282] 0 containers: []
	W0407 13:37:55.105068 1213906 logs.go:284] No container was found matching "kube-proxy"
	I0407 13:37:55.105076 1213906 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0407 13:37:55.105151 1213906 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0407 13:37:55.144717 1213906 cri.go:89] found id: ""
	I0407 13:37:55.144750 1213906 logs.go:282] 0 containers: []
	W0407 13:37:55.144760 1213906 logs.go:284] No container was found matching "kube-controller-manager"
	I0407 13:37:55.144769 1213906 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0407 13:37:55.144843 1213906 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0407 13:37:55.185820 1213906 cri.go:89] found id: ""
	I0407 13:37:55.185858 1213906 logs.go:282] 0 containers: []
	W0407 13:37:55.185870 1213906 logs.go:284] No container was found matching "kindnet"
	I0407 13:37:55.185879 1213906 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0407 13:37:55.185951 1213906 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0407 13:37:55.228493 1213906 cri.go:89] found id: ""
	I0407 13:37:55.228534 1213906 logs.go:282] 0 containers: []
	W0407 13:37:55.228546 1213906 logs.go:284] No container was found matching "kubernetes-dashboard"
	I0407 13:37:55.228563 1213906 logs.go:123] Gathering logs for container status ...
	I0407 13:37:55.228579 1213906 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0407 13:37:55.274406 1213906 logs.go:123] Gathering logs for kubelet ...
	I0407 13:37:55.274450 1213906 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0407 13:37:55.331293 1213906 logs.go:123] Gathering logs for dmesg ...
	I0407 13:37:55.331347 1213906 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0407 13:37:55.347062 1213906 logs.go:123] Gathering logs for describe nodes ...
	I0407 13:37:55.347101 1213906 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0407 13:37:55.422219 1213906 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0407 13:37:55.422436 1213906 logs.go:123] Gathering logs for CRI-O ...
	I0407 13:37:55.422462 1213906 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0407 13:37:58.003146 1213906 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0407 13:37:58.016514 1213906 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0407 13:37:58.016608 1213906 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0407 13:37:58.057664 1213906 cri.go:89] found id: ""
	I0407 13:37:58.057723 1213906 logs.go:282] 0 containers: []
	W0407 13:37:58.057737 1213906 logs.go:284] No container was found matching "kube-apiserver"
	I0407 13:37:58.057747 1213906 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0407 13:37:58.057828 1213906 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0407 13:37:58.098836 1213906 cri.go:89] found id: ""
	I0407 13:37:58.098866 1213906 logs.go:282] 0 containers: []
	W0407 13:37:58.098876 1213906 logs.go:284] No container was found matching "etcd"
	I0407 13:37:58.098884 1213906 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0407 13:37:58.098946 1213906 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0407 13:37:58.140431 1213906 cri.go:89] found id: ""
	I0407 13:37:58.140466 1213906 logs.go:282] 0 containers: []
	W0407 13:37:58.140476 1213906 logs.go:284] No container was found matching "coredns"
	I0407 13:37:58.140484 1213906 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0407 13:37:58.140556 1213906 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0407 13:37:58.179777 1213906 cri.go:89] found id: ""
	I0407 13:37:58.179812 1213906 logs.go:282] 0 containers: []
	W0407 13:37:58.179824 1213906 logs.go:284] No container was found matching "kube-scheduler"
	I0407 13:37:58.179833 1213906 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0407 13:37:58.179908 1213906 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0407 13:37:58.222681 1213906 cri.go:89] found id: ""
	I0407 13:37:58.222717 1213906 logs.go:282] 0 containers: []
	W0407 13:37:58.222729 1213906 logs.go:284] No container was found matching "kube-proxy"
	I0407 13:37:58.222738 1213906 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0407 13:37:58.222810 1213906 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0407 13:37:58.278504 1213906 cri.go:89] found id: ""
	I0407 13:37:58.278616 1213906 logs.go:282] 0 containers: []
	W0407 13:37:58.278783 1213906 logs.go:284] No container was found matching "kube-controller-manager"
	I0407 13:37:58.278819 1213906 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0407 13:37:58.278969 1213906 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0407 13:37:58.328703 1213906 cri.go:89] found id: ""
	I0407 13:37:58.328745 1213906 logs.go:282] 0 containers: []
	W0407 13:37:58.328756 1213906 logs.go:284] No container was found matching "kindnet"
	I0407 13:37:58.328766 1213906 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0407 13:37:58.328848 1213906 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0407 13:37:58.387239 1213906 cri.go:89] found id: ""
	I0407 13:37:58.387278 1213906 logs.go:282] 0 containers: []
	W0407 13:37:58.387290 1213906 logs.go:284] No container was found matching "kubernetes-dashboard"
	I0407 13:37:58.387308 1213906 logs.go:123] Gathering logs for describe nodes ...
	I0407 13:37:58.387326 1213906 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0407 13:37:58.493512 1213906 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0407 13:37:58.493545 1213906 logs.go:123] Gathering logs for CRI-O ...
	I0407 13:37:58.493563 1213906 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0407 13:37:58.581027 1213906 logs.go:123] Gathering logs for container status ...
	I0407 13:37:58.581076 1213906 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0407 13:37:58.631172 1213906 logs.go:123] Gathering logs for kubelet ...
	I0407 13:37:58.631225 1213906 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0407 13:37:58.709105 1213906 logs.go:123] Gathering logs for dmesg ...
	I0407 13:37:58.709180 1213906 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0407 13:38:01.227746 1213906 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0407 13:38:01.241290 1213906 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0407 13:38:01.241375 1213906 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0407 13:38:01.283619 1213906 cri.go:89] found id: ""
	I0407 13:38:01.283661 1213906 logs.go:282] 0 containers: []
	W0407 13:38:01.283674 1213906 logs.go:284] No container was found matching "kube-apiserver"
	I0407 13:38:01.283684 1213906 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0407 13:38:01.283769 1213906 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0407 13:38:01.328261 1213906 cri.go:89] found id: ""
	I0407 13:38:01.328295 1213906 logs.go:282] 0 containers: []
	W0407 13:38:01.328307 1213906 logs.go:284] No container was found matching "etcd"
	I0407 13:38:01.328316 1213906 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0407 13:38:01.328384 1213906 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0407 13:38:01.370621 1213906 cri.go:89] found id: ""
	I0407 13:38:01.370669 1213906 logs.go:282] 0 containers: []
	W0407 13:38:01.370678 1213906 logs.go:284] No container was found matching "coredns"
	I0407 13:38:01.370684 1213906 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0407 13:38:01.370774 1213906 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0407 13:38:01.410115 1213906 cri.go:89] found id: ""
	I0407 13:38:01.410145 1213906 logs.go:282] 0 containers: []
	W0407 13:38:01.410154 1213906 logs.go:284] No container was found matching "kube-scheduler"
	I0407 13:38:01.410161 1213906 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0407 13:38:01.410221 1213906 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0407 13:38:01.450052 1213906 cri.go:89] found id: ""
	I0407 13:38:01.450095 1213906 logs.go:282] 0 containers: []
	W0407 13:38:01.450106 1213906 logs.go:284] No container was found matching "kube-proxy"
	I0407 13:38:01.450116 1213906 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0407 13:38:01.450202 1213906 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0407 13:38:01.490649 1213906 cri.go:89] found id: ""
	I0407 13:38:01.490681 1213906 logs.go:282] 0 containers: []
	W0407 13:38:01.490690 1213906 logs.go:284] No container was found matching "kube-controller-manager"
	I0407 13:38:01.490696 1213906 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0407 13:38:01.490764 1213906 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0407 13:38:01.531209 1213906 cri.go:89] found id: ""
	I0407 13:38:01.531253 1213906 logs.go:282] 0 containers: []
	W0407 13:38:01.531263 1213906 logs.go:284] No container was found matching "kindnet"
	I0407 13:38:01.531270 1213906 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0407 13:38:01.531339 1213906 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0407 13:38:01.571286 1213906 cri.go:89] found id: ""
	I0407 13:38:01.571319 1213906 logs.go:282] 0 containers: []
	W0407 13:38:01.571338 1213906 logs.go:284] No container was found matching "kubernetes-dashboard"
	I0407 13:38:01.571354 1213906 logs.go:123] Gathering logs for describe nodes ...
	I0407 13:38:01.571372 1213906 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0407 13:38:01.658858 1213906 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0407 13:38:01.658886 1213906 logs.go:123] Gathering logs for CRI-O ...
	I0407 13:38:01.658902 1213906 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0407 13:38:01.750039 1213906 logs.go:123] Gathering logs for container status ...
	I0407 13:38:01.750095 1213906 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0407 13:38:01.794547 1213906 logs.go:123] Gathering logs for kubelet ...
	I0407 13:38:01.794592 1213906 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0407 13:38:01.852465 1213906 logs.go:123] Gathering logs for dmesg ...
	I0407 13:38:01.852521 1213906 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0407 13:38:04.370488 1213906 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0407 13:38:04.388942 1213906 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0407 13:38:04.389072 1213906 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0407 13:38:04.433570 1213906 cri.go:89] found id: ""
	I0407 13:38:04.433615 1213906 logs.go:282] 0 containers: []
	W0407 13:38:04.433627 1213906 logs.go:284] No container was found matching "kube-apiserver"
	I0407 13:38:04.433648 1213906 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0407 13:38:04.433760 1213906 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0407 13:38:04.473587 1213906 cri.go:89] found id: ""
	I0407 13:38:04.473619 1213906 logs.go:282] 0 containers: []
	W0407 13:38:04.473631 1213906 logs.go:284] No container was found matching "etcd"
	I0407 13:38:04.473639 1213906 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0407 13:38:04.473734 1213906 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0407 13:38:04.523877 1213906 cri.go:89] found id: ""
	I0407 13:38:04.523947 1213906 logs.go:282] 0 containers: []
	W0407 13:38:04.523963 1213906 logs.go:284] No container was found matching "coredns"
	I0407 13:38:04.523973 1213906 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0407 13:38:04.524069 1213906 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0407 13:38:04.569763 1213906 cri.go:89] found id: ""
	I0407 13:38:04.569904 1213906 logs.go:282] 0 containers: []
	W0407 13:38:04.569927 1213906 logs.go:284] No container was found matching "kube-scheduler"
	I0407 13:38:04.569942 1213906 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0407 13:38:04.570064 1213906 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0407 13:38:04.609559 1213906 cri.go:89] found id: ""
	I0407 13:38:04.609607 1213906 logs.go:282] 0 containers: []
	W0407 13:38:04.609621 1213906 logs.go:284] No container was found matching "kube-proxy"
	I0407 13:38:04.609630 1213906 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0407 13:38:04.609758 1213906 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0407 13:38:04.645445 1213906 cri.go:89] found id: ""
	I0407 13:38:04.645482 1213906 logs.go:282] 0 containers: []
	W0407 13:38:04.645496 1213906 logs.go:284] No container was found matching "kube-controller-manager"
	I0407 13:38:04.645505 1213906 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0407 13:38:04.645567 1213906 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0407 13:38:04.685013 1213906 cri.go:89] found id: ""
	I0407 13:38:04.685053 1213906 logs.go:282] 0 containers: []
	W0407 13:38:04.685064 1213906 logs.go:284] No container was found matching "kindnet"
	I0407 13:38:04.685070 1213906 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0407 13:38:04.685144 1213906 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0407 13:38:04.726975 1213906 cri.go:89] found id: ""
	I0407 13:38:04.727031 1213906 logs.go:282] 0 containers: []
	W0407 13:38:04.727046 1213906 logs.go:284] No container was found matching "kubernetes-dashboard"
	I0407 13:38:04.727061 1213906 logs.go:123] Gathering logs for CRI-O ...
	I0407 13:38:04.727082 1213906 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0407 13:38:04.805183 1213906 logs.go:123] Gathering logs for container status ...
	I0407 13:38:04.805245 1213906 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0407 13:38:04.848372 1213906 logs.go:123] Gathering logs for kubelet ...
	I0407 13:38:04.848450 1213906 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0407 13:38:04.909118 1213906 logs.go:123] Gathering logs for dmesg ...
	I0407 13:38:04.909164 1213906 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0407 13:38:04.925886 1213906 logs.go:123] Gathering logs for describe nodes ...
	I0407 13:38:04.925945 1213906 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0407 13:38:05.004597 1213906 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0407 13:38:07.506260 1213906 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0407 13:38:07.523181 1213906 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0407 13:38:07.523277 1213906 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0407 13:38:07.571392 1213906 cri.go:89] found id: ""
	I0407 13:38:07.571431 1213906 logs.go:282] 0 containers: []
	W0407 13:38:07.571440 1213906 logs.go:284] No container was found matching "kube-apiserver"
	I0407 13:38:07.571446 1213906 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0407 13:38:07.571512 1213906 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0407 13:38:07.614849 1213906 cri.go:89] found id: ""
	I0407 13:38:07.614893 1213906 logs.go:282] 0 containers: []
	W0407 13:38:07.614907 1213906 logs.go:284] No container was found matching "etcd"
	I0407 13:38:07.614915 1213906 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0407 13:38:07.614995 1213906 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0407 13:38:07.654109 1213906 cri.go:89] found id: ""
	I0407 13:38:07.654145 1213906 logs.go:282] 0 containers: []
	W0407 13:38:07.654157 1213906 logs.go:284] No container was found matching "coredns"
	I0407 13:38:07.654166 1213906 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0407 13:38:07.654239 1213906 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0407 13:38:07.693120 1213906 cri.go:89] found id: ""
	I0407 13:38:07.693156 1213906 logs.go:282] 0 containers: []
	W0407 13:38:07.693167 1213906 logs.go:284] No container was found matching "kube-scheduler"
	I0407 13:38:07.693176 1213906 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0407 13:38:07.693290 1213906 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0407 13:38:07.734015 1213906 cri.go:89] found id: ""
	I0407 13:38:07.734049 1213906 logs.go:282] 0 containers: []
	W0407 13:38:07.734061 1213906 logs.go:284] No container was found matching "kube-proxy"
	I0407 13:38:07.734070 1213906 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0407 13:38:07.734170 1213906 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0407 13:38:07.775027 1213906 cri.go:89] found id: ""
	I0407 13:38:07.775073 1213906 logs.go:282] 0 containers: []
	W0407 13:38:07.775085 1213906 logs.go:284] No container was found matching "kube-controller-manager"
	I0407 13:38:07.775095 1213906 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0407 13:38:07.775173 1213906 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0407 13:38:07.820494 1213906 cri.go:89] found id: ""
	I0407 13:38:07.820534 1213906 logs.go:282] 0 containers: []
	W0407 13:38:07.820543 1213906 logs.go:284] No container was found matching "kindnet"
	I0407 13:38:07.820550 1213906 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0407 13:38:07.820610 1213906 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0407 13:38:07.860461 1213906 cri.go:89] found id: ""
	I0407 13:38:07.860497 1213906 logs.go:282] 0 containers: []
	W0407 13:38:07.860507 1213906 logs.go:284] No container was found matching "kubernetes-dashboard"
	I0407 13:38:07.860517 1213906 logs.go:123] Gathering logs for kubelet ...
	I0407 13:38:07.860532 1213906 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0407 13:38:07.919082 1213906 logs.go:123] Gathering logs for dmesg ...
	I0407 13:38:07.919135 1213906 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0407 13:38:07.936538 1213906 logs.go:123] Gathering logs for describe nodes ...
	I0407 13:38:07.936579 1213906 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0407 13:38:08.019264 1213906 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0407 13:38:08.019289 1213906 logs.go:123] Gathering logs for CRI-O ...
	I0407 13:38:08.019312 1213906 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0407 13:38:08.110701 1213906 logs.go:123] Gathering logs for container status ...
	I0407 13:38:08.110761 1213906 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0407 13:38:10.659314 1213906 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0407 13:38:10.673843 1213906 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0407 13:38:10.673936 1213906 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0407 13:38:10.717184 1213906 cri.go:89] found id: ""
	I0407 13:38:10.717227 1213906 logs.go:282] 0 containers: []
	W0407 13:38:10.717239 1213906 logs.go:284] No container was found matching "kube-apiserver"
	I0407 13:38:10.717250 1213906 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0407 13:38:10.717338 1213906 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0407 13:38:10.758890 1213906 cri.go:89] found id: ""
	I0407 13:38:10.758926 1213906 logs.go:282] 0 containers: []
	W0407 13:38:10.758936 1213906 logs.go:284] No container was found matching "etcd"
	I0407 13:38:10.758942 1213906 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0407 13:38:10.759013 1213906 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0407 13:38:10.797928 1213906 cri.go:89] found id: ""
	I0407 13:38:10.797965 1213906 logs.go:282] 0 containers: []
	W0407 13:38:10.797974 1213906 logs.go:284] No container was found matching "coredns"
	I0407 13:38:10.797982 1213906 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0407 13:38:10.798064 1213906 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0407 13:38:10.842052 1213906 cri.go:89] found id: ""
	I0407 13:38:10.842100 1213906 logs.go:282] 0 containers: []
	W0407 13:38:10.842112 1213906 logs.go:284] No container was found matching "kube-scheduler"
	I0407 13:38:10.842121 1213906 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0407 13:38:10.842195 1213906 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0407 13:38:10.881471 1213906 cri.go:89] found id: ""
	I0407 13:38:10.881501 1213906 logs.go:282] 0 containers: []
	W0407 13:38:10.881510 1213906 logs.go:284] No container was found matching "kube-proxy"
	I0407 13:38:10.881516 1213906 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0407 13:38:10.881586 1213906 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0407 13:38:10.923335 1213906 cri.go:89] found id: ""
	I0407 13:38:10.923367 1213906 logs.go:282] 0 containers: []
	W0407 13:38:10.923380 1213906 logs.go:284] No container was found matching "kube-controller-manager"
	I0407 13:38:10.923389 1213906 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0407 13:38:10.923466 1213906 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0407 13:38:10.965046 1213906 cri.go:89] found id: ""
	I0407 13:38:10.965087 1213906 logs.go:282] 0 containers: []
	W0407 13:38:10.965096 1213906 logs.go:284] No container was found matching "kindnet"
	I0407 13:38:10.965102 1213906 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0407 13:38:10.965163 1213906 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0407 13:38:11.009644 1213906 cri.go:89] found id: ""
	I0407 13:38:11.009682 1213906 logs.go:282] 0 containers: []
	W0407 13:38:11.009694 1213906 logs.go:284] No container was found matching "kubernetes-dashboard"
	I0407 13:38:11.009720 1213906 logs.go:123] Gathering logs for describe nodes ...
	I0407 13:38:11.009737 1213906 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0407 13:38:11.086314 1213906 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0407 13:38:11.086337 1213906 logs.go:123] Gathering logs for CRI-O ...
	I0407 13:38:11.086352 1213906 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0407 13:38:11.179346 1213906 logs.go:123] Gathering logs for container status ...
	I0407 13:38:11.179391 1213906 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0407 13:38:11.227459 1213906 logs.go:123] Gathering logs for kubelet ...
	I0407 13:38:11.227505 1213906 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0407 13:38:11.289409 1213906 logs.go:123] Gathering logs for dmesg ...
	I0407 13:38:11.289456 1213906 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0407 13:38:13.804758 1213906 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0407 13:38:13.818792 1213906 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0407 13:38:13.818873 1213906 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0407 13:38:13.855066 1213906 cri.go:89] found id: ""
	I0407 13:38:13.855101 1213906 logs.go:282] 0 containers: []
	W0407 13:38:13.855111 1213906 logs.go:284] No container was found matching "kube-apiserver"
	I0407 13:38:13.855118 1213906 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0407 13:38:13.855177 1213906 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0407 13:38:13.892476 1213906 cri.go:89] found id: ""
	I0407 13:38:13.892508 1213906 logs.go:282] 0 containers: []
	W0407 13:38:13.892519 1213906 logs.go:284] No container was found matching "etcd"
	I0407 13:38:13.892527 1213906 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0407 13:38:13.892595 1213906 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0407 13:38:13.927175 1213906 cri.go:89] found id: ""
	I0407 13:38:13.927208 1213906 logs.go:282] 0 containers: []
	W0407 13:38:13.927217 1213906 logs.go:284] No container was found matching "coredns"
	I0407 13:38:13.927224 1213906 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0407 13:38:13.927312 1213906 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0407 13:38:13.971556 1213906 cri.go:89] found id: ""
	I0407 13:38:13.971581 1213906 logs.go:282] 0 containers: []
	W0407 13:38:13.971591 1213906 logs.go:284] No container was found matching "kube-scheduler"
	I0407 13:38:13.971599 1213906 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0407 13:38:13.971662 1213906 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0407 13:38:14.011793 1213906 cri.go:89] found id: ""
	I0407 13:38:14.011824 1213906 logs.go:282] 0 containers: []
	W0407 13:38:14.011835 1213906 logs.go:284] No container was found matching "kube-proxy"
	I0407 13:38:14.011843 1213906 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0407 13:38:14.011925 1213906 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0407 13:38:14.050493 1213906 cri.go:89] found id: ""
	I0407 13:38:14.050527 1213906 logs.go:282] 0 containers: []
	W0407 13:38:14.050538 1213906 logs.go:284] No container was found matching "kube-controller-manager"
	I0407 13:38:14.050547 1213906 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0407 13:38:14.050617 1213906 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0407 13:38:14.085673 1213906 cri.go:89] found id: ""
	I0407 13:38:14.085724 1213906 logs.go:282] 0 containers: []
	W0407 13:38:14.085737 1213906 logs.go:284] No container was found matching "kindnet"
	I0407 13:38:14.085746 1213906 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0407 13:38:14.085812 1213906 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0407 13:38:14.131856 1213906 cri.go:89] found id: ""
	I0407 13:38:14.131893 1213906 logs.go:282] 0 containers: []
	W0407 13:38:14.131906 1213906 logs.go:284] No container was found matching "kubernetes-dashboard"
	I0407 13:38:14.131920 1213906 logs.go:123] Gathering logs for kubelet ...
	I0407 13:38:14.131937 1213906 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0407 13:38:14.185085 1213906 logs.go:123] Gathering logs for dmesg ...
	I0407 13:38:14.185138 1213906 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0407 13:38:14.199586 1213906 logs.go:123] Gathering logs for describe nodes ...
	I0407 13:38:14.199625 1213906 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0407 13:38:14.277571 1213906 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0407 13:38:14.277604 1213906 logs.go:123] Gathering logs for CRI-O ...
	I0407 13:38:14.277624 1213906 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0407 13:38:14.353802 1213906 logs.go:123] Gathering logs for container status ...
	I0407 13:38:14.353859 1213906 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0407 13:38:16.895403 1213906 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0407 13:38:16.909675 1213906 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0407 13:38:16.909846 1213906 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0407 13:38:16.945406 1213906 cri.go:89] found id: ""
	I0407 13:38:16.945455 1213906 logs.go:282] 0 containers: []
	W0407 13:38:16.945484 1213906 logs.go:284] No container was found matching "kube-apiserver"
	I0407 13:38:16.945494 1213906 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0407 13:38:16.945574 1213906 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0407 13:38:16.983588 1213906 cri.go:89] found id: ""
	I0407 13:38:16.983626 1213906 logs.go:282] 0 containers: []
	W0407 13:38:16.983638 1213906 logs.go:284] No container was found matching "etcd"
	I0407 13:38:16.983647 1213906 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0407 13:38:16.983717 1213906 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0407 13:38:17.020444 1213906 cri.go:89] found id: ""
	I0407 13:38:17.020487 1213906 logs.go:282] 0 containers: []
	W0407 13:38:17.020501 1213906 logs.go:284] No container was found matching "coredns"
	I0407 13:38:17.020510 1213906 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0407 13:38:17.020593 1213906 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0407 13:38:17.060614 1213906 cri.go:89] found id: ""
	I0407 13:38:17.060657 1213906 logs.go:282] 0 containers: []
	W0407 13:38:17.060669 1213906 logs.go:284] No container was found matching "kube-scheduler"
	I0407 13:38:17.060678 1213906 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0407 13:38:17.060762 1213906 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0407 13:38:17.105096 1213906 cri.go:89] found id: ""
	I0407 13:38:17.105136 1213906 logs.go:282] 0 containers: []
	W0407 13:38:17.105148 1213906 logs.go:284] No container was found matching "kube-proxy"
	I0407 13:38:17.105156 1213906 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0407 13:38:17.105237 1213906 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0407 13:38:17.144101 1213906 cri.go:89] found id: ""
	I0407 13:38:17.144140 1213906 logs.go:282] 0 containers: []
	W0407 13:38:17.144156 1213906 logs.go:284] No container was found matching "kube-controller-manager"
	I0407 13:38:17.144166 1213906 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0407 13:38:17.144242 1213906 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0407 13:38:17.190569 1213906 cri.go:89] found id: ""
	I0407 13:38:17.190602 1213906 logs.go:282] 0 containers: []
	W0407 13:38:17.190613 1213906 logs.go:284] No container was found matching "kindnet"
	I0407 13:38:17.190621 1213906 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0407 13:38:17.190693 1213906 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0407 13:38:17.233997 1213906 cri.go:89] found id: ""
	I0407 13:38:17.234030 1213906 logs.go:282] 0 containers: []
	W0407 13:38:17.234039 1213906 logs.go:284] No container was found matching "kubernetes-dashboard"
	I0407 13:38:17.234051 1213906 logs.go:123] Gathering logs for CRI-O ...
	I0407 13:38:17.234065 1213906 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0407 13:38:17.321443 1213906 logs.go:123] Gathering logs for container status ...
	I0407 13:38:17.321495 1213906 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0407 13:38:17.370755 1213906 logs.go:123] Gathering logs for kubelet ...
	I0407 13:38:17.370794 1213906 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0407 13:38:17.429210 1213906 logs.go:123] Gathering logs for dmesg ...
	I0407 13:38:17.429268 1213906 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0407 13:38:17.444684 1213906 logs.go:123] Gathering logs for describe nodes ...
	I0407 13:38:17.444722 1213906 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0407 13:38:17.522630 1213906 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0407 13:38:20.022948 1213906 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0407 13:38:20.037136 1213906 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0407 13:38:20.037218 1213906 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0407 13:38:20.076138 1213906 cri.go:89] found id: ""
	I0407 13:38:20.076168 1213906 logs.go:282] 0 containers: []
	W0407 13:38:20.076177 1213906 logs.go:284] No container was found matching "kube-apiserver"
	I0407 13:38:20.076183 1213906 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0407 13:38:20.076254 1213906 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0407 13:38:20.116308 1213906 cri.go:89] found id: ""
	I0407 13:38:20.116347 1213906 logs.go:282] 0 containers: []
	W0407 13:38:20.116357 1213906 logs.go:284] No container was found matching "etcd"
	I0407 13:38:20.116366 1213906 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0407 13:38:20.116425 1213906 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0407 13:38:20.154226 1213906 cri.go:89] found id: ""
	I0407 13:38:20.154261 1213906 logs.go:282] 0 containers: []
	W0407 13:38:20.154286 1213906 logs.go:284] No container was found matching "coredns"
	I0407 13:38:20.154293 1213906 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0407 13:38:20.154358 1213906 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0407 13:38:20.193534 1213906 cri.go:89] found id: ""
	I0407 13:38:20.193570 1213906 logs.go:282] 0 containers: []
	W0407 13:38:20.193581 1213906 logs.go:284] No container was found matching "kube-scheduler"
	I0407 13:38:20.193590 1213906 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0407 13:38:20.193658 1213906 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0407 13:38:20.233242 1213906 cri.go:89] found id: ""
	I0407 13:38:20.233280 1213906 logs.go:282] 0 containers: []
	W0407 13:38:20.233292 1213906 logs.go:284] No container was found matching "kube-proxy"
	I0407 13:38:20.233300 1213906 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0407 13:38:20.233379 1213906 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0407 13:38:20.273298 1213906 cri.go:89] found id: ""
	I0407 13:38:20.273340 1213906 logs.go:282] 0 containers: []
	W0407 13:38:20.273354 1213906 logs.go:284] No container was found matching "kube-controller-manager"
	I0407 13:38:20.273364 1213906 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0407 13:38:20.273483 1213906 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0407 13:38:20.317495 1213906 cri.go:89] found id: ""
	I0407 13:38:20.317538 1213906 logs.go:282] 0 containers: []
	W0407 13:38:20.317548 1213906 logs.go:284] No container was found matching "kindnet"
	I0407 13:38:20.317554 1213906 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0407 13:38:20.317611 1213906 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0407 13:38:20.356020 1213906 cri.go:89] found id: ""
	I0407 13:38:20.356054 1213906 logs.go:282] 0 containers: []
	W0407 13:38:20.356063 1213906 logs.go:284] No container was found matching "kubernetes-dashboard"
	I0407 13:38:20.356074 1213906 logs.go:123] Gathering logs for kubelet ...
	I0407 13:38:20.356087 1213906 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0407 13:38:20.424550 1213906 logs.go:123] Gathering logs for dmesg ...
	I0407 13:38:20.424618 1213906 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0407 13:38:20.444415 1213906 logs.go:123] Gathering logs for describe nodes ...
	I0407 13:38:20.444454 1213906 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0407 13:38:20.533211 1213906 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0407 13:38:20.533242 1213906 logs.go:123] Gathering logs for CRI-O ...
	I0407 13:38:20.533274 1213906 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0407 13:38:20.635661 1213906 logs.go:123] Gathering logs for container status ...
	I0407 13:38:20.635729 1213906 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0407 13:38:23.179699 1213906 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0407 13:38:23.195603 1213906 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0407 13:38:23.195701 1213906 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0407 13:38:23.247984 1213906 cri.go:89] found id: ""
	I0407 13:38:23.248021 1213906 logs.go:282] 0 containers: []
	W0407 13:38:23.248030 1213906 logs.go:284] No container was found matching "kube-apiserver"
	I0407 13:38:23.248037 1213906 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0407 13:38:23.248113 1213906 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0407 13:38:23.297330 1213906 cri.go:89] found id: ""
	I0407 13:38:23.297367 1213906 logs.go:282] 0 containers: []
	W0407 13:38:23.297380 1213906 logs.go:284] No container was found matching "etcd"
	I0407 13:38:23.297389 1213906 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0407 13:38:23.297465 1213906 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0407 13:38:23.342695 1213906 cri.go:89] found id: ""
	I0407 13:38:23.342732 1213906 logs.go:282] 0 containers: []
	W0407 13:38:23.342745 1213906 logs.go:284] No container was found matching "coredns"
	I0407 13:38:23.342754 1213906 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0407 13:38:23.342854 1213906 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0407 13:38:23.390557 1213906 cri.go:89] found id: ""
	I0407 13:38:23.390597 1213906 logs.go:282] 0 containers: []
	W0407 13:38:23.390610 1213906 logs.go:284] No container was found matching "kube-scheduler"
	I0407 13:38:23.390618 1213906 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0407 13:38:23.390693 1213906 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0407 13:38:23.436306 1213906 cri.go:89] found id: ""
	I0407 13:38:23.436431 1213906 logs.go:282] 0 containers: []
	W0407 13:38:23.436454 1213906 logs.go:284] No container was found matching "kube-proxy"
	I0407 13:38:23.436465 1213906 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0407 13:38:23.436544 1213906 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0407 13:38:23.489592 1213906 cri.go:89] found id: ""
	I0407 13:38:23.489635 1213906 logs.go:282] 0 containers: []
	W0407 13:38:23.489647 1213906 logs.go:284] No container was found matching "kube-controller-manager"
	I0407 13:38:23.489656 1213906 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0407 13:38:23.489757 1213906 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0407 13:38:23.549612 1213906 cri.go:89] found id: ""
	I0407 13:38:23.549665 1213906 logs.go:282] 0 containers: []
	W0407 13:38:23.549679 1213906 logs.go:284] No container was found matching "kindnet"
	I0407 13:38:23.549688 1213906 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0407 13:38:23.549803 1213906 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0407 13:38:23.593793 1213906 cri.go:89] found id: ""
	I0407 13:38:23.593834 1213906 logs.go:282] 0 containers: []
	W0407 13:38:23.593846 1213906 logs.go:284] No container was found matching "kubernetes-dashboard"
	I0407 13:38:23.593861 1213906 logs.go:123] Gathering logs for dmesg ...
	I0407 13:38:23.593882 1213906 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0407 13:38:23.613155 1213906 logs.go:123] Gathering logs for describe nodes ...
	I0407 13:38:23.613214 1213906 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0407 13:38:23.692080 1213906 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0407 13:38:23.692115 1213906 logs.go:123] Gathering logs for CRI-O ...
	I0407 13:38:23.692134 1213906 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0407 13:38:23.792659 1213906 logs.go:123] Gathering logs for container status ...
	I0407 13:38:23.792710 1213906 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0407 13:38:23.867830 1213906 logs.go:123] Gathering logs for kubelet ...
	I0407 13:38:23.867872 1213906 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0407 13:38:26.435191 1213906 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0407 13:38:26.450136 1213906 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0407 13:38:26.450228 1213906 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0407 13:38:26.486457 1213906 cri.go:89] found id: ""
	I0407 13:38:26.486498 1213906 logs.go:282] 0 containers: []
	W0407 13:38:26.486510 1213906 logs.go:284] No container was found matching "kube-apiserver"
	I0407 13:38:26.486520 1213906 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0407 13:38:26.486605 1213906 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0407 13:38:26.523604 1213906 cri.go:89] found id: ""
	I0407 13:38:26.523642 1213906 logs.go:282] 0 containers: []
	W0407 13:38:26.523655 1213906 logs.go:284] No container was found matching "etcd"
	I0407 13:38:26.523663 1213906 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0407 13:38:26.523737 1213906 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0407 13:38:26.563215 1213906 cri.go:89] found id: ""
	I0407 13:38:26.563253 1213906 logs.go:282] 0 containers: []
	W0407 13:38:26.563276 1213906 logs.go:284] No container was found matching "coredns"
	I0407 13:38:26.563284 1213906 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0407 13:38:26.563353 1213906 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0407 13:38:26.597983 1213906 cri.go:89] found id: ""
	I0407 13:38:26.598018 1213906 logs.go:282] 0 containers: []
	W0407 13:38:26.598030 1213906 logs.go:284] No container was found matching "kube-scheduler"
	I0407 13:38:26.598038 1213906 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0407 13:38:26.598111 1213906 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0407 13:38:26.636270 1213906 cri.go:89] found id: ""
	I0407 13:38:26.636304 1213906 logs.go:282] 0 containers: []
	W0407 13:38:26.636313 1213906 logs.go:284] No container was found matching "kube-proxy"
	I0407 13:38:26.636323 1213906 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0407 13:38:26.636395 1213906 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0407 13:38:26.675668 1213906 cri.go:89] found id: ""
	I0407 13:38:26.675705 1213906 logs.go:282] 0 containers: []
	W0407 13:38:26.675717 1213906 logs.go:284] No container was found matching "kube-controller-manager"
	I0407 13:38:26.675731 1213906 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0407 13:38:26.675828 1213906 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0407 13:38:26.713079 1213906 cri.go:89] found id: ""
	I0407 13:38:26.713109 1213906 logs.go:282] 0 containers: []
	W0407 13:38:26.713119 1213906 logs.go:284] No container was found matching "kindnet"
	I0407 13:38:26.713126 1213906 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0407 13:38:26.713235 1213906 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0407 13:38:26.751390 1213906 cri.go:89] found id: ""
	I0407 13:38:26.751419 1213906 logs.go:282] 0 containers: []
	W0407 13:38:26.751434 1213906 logs.go:284] No container was found matching "kubernetes-dashboard"
	I0407 13:38:26.751445 1213906 logs.go:123] Gathering logs for container status ...
	I0407 13:38:26.751457 1213906 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0407 13:38:26.792848 1213906 logs.go:123] Gathering logs for kubelet ...
	I0407 13:38:26.792890 1213906 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0407 13:38:26.846159 1213906 logs.go:123] Gathering logs for dmesg ...
	I0407 13:38:26.846214 1213906 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0407 13:38:26.860024 1213906 logs.go:123] Gathering logs for describe nodes ...
	I0407 13:38:26.860061 1213906 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0407 13:38:26.935582 1213906 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0407 13:38:26.935610 1213906 logs.go:123] Gathering logs for CRI-O ...
	I0407 13:38:26.935624 1213906 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0407 13:38:29.540180 1213906 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0407 13:38:29.563595 1213906 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0407 13:38:29.563702 1213906 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0407 13:38:29.624303 1213906 cri.go:89] found id: ""
	I0407 13:38:29.624337 1213906 logs.go:282] 0 containers: []
	W0407 13:38:29.624349 1213906 logs.go:284] No container was found matching "kube-apiserver"
	I0407 13:38:29.624357 1213906 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0407 13:38:29.624441 1213906 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0407 13:38:29.684330 1213906 cri.go:89] found id: ""
	I0407 13:38:29.684381 1213906 logs.go:282] 0 containers: []
	W0407 13:38:29.684394 1213906 logs.go:284] No container was found matching "etcd"
	I0407 13:38:29.684403 1213906 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0407 13:38:29.684477 1213906 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0407 13:38:29.730351 1213906 cri.go:89] found id: ""
	I0407 13:38:29.730381 1213906 logs.go:282] 0 containers: []
	W0407 13:38:29.730389 1213906 logs.go:284] No container was found matching "coredns"
	I0407 13:38:29.730396 1213906 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0407 13:38:29.730453 1213906 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0407 13:38:29.772407 1213906 cri.go:89] found id: ""
	I0407 13:38:29.772457 1213906 logs.go:282] 0 containers: []
	W0407 13:38:29.772467 1213906 logs.go:284] No container was found matching "kube-scheduler"
	I0407 13:38:29.772475 1213906 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0407 13:38:29.772543 1213906 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0407 13:38:29.811948 1213906 cri.go:89] found id: ""
	I0407 13:38:29.811986 1213906 logs.go:282] 0 containers: []
	W0407 13:38:29.811996 1213906 logs.go:284] No container was found matching "kube-proxy"
	I0407 13:38:29.812003 1213906 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0407 13:38:29.812069 1213906 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0407 13:38:29.851337 1213906 cri.go:89] found id: ""
	I0407 13:38:29.851374 1213906 logs.go:282] 0 containers: []
	W0407 13:38:29.851383 1213906 logs.go:284] No container was found matching "kube-controller-manager"
	I0407 13:38:29.851393 1213906 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0407 13:38:29.851459 1213906 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0407 13:38:29.892507 1213906 cri.go:89] found id: ""
	I0407 13:38:29.892559 1213906 logs.go:282] 0 containers: []
	W0407 13:38:29.892572 1213906 logs.go:284] No container was found matching "kindnet"
	I0407 13:38:29.892580 1213906 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0407 13:38:29.892653 1213906 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0407 13:38:29.930354 1213906 cri.go:89] found id: ""
	I0407 13:38:29.930383 1213906 logs.go:282] 0 containers: []
	W0407 13:38:29.930391 1213906 logs.go:284] No container was found matching "kubernetes-dashboard"
	I0407 13:38:29.930402 1213906 logs.go:123] Gathering logs for kubelet ...
	I0407 13:38:29.930416 1213906 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0407 13:38:29.983442 1213906 logs.go:123] Gathering logs for dmesg ...
	I0407 13:38:29.983491 1213906 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0407 13:38:29.999379 1213906 logs.go:123] Gathering logs for describe nodes ...
	I0407 13:38:29.999425 1213906 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0407 13:38:30.079874 1213906 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0407 13:38:30.079899 1213906 logs.go:123] Gathering logs for CRI-O ...
	I0407 13:38:30.079914 1213906 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0407 13:38:30.169022 1213906 logs.go:123] Gathering logs for container status ...
	I0407 13:38:30.169077 1213906 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0407 13:38:32.716109 1213906 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0407 13:38:32.730667 1213906 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0407 13:38:32.730752 1213906 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0407 13:38:32.769773 1213906 cri.go:89] found id: ""
	I0407 13:38:32.769811 1213906 logs.go:282] 0 containers: []
	W0407 13:38:32.769822 1213906 logs.go:284] No container was found matching "kube-apiserver"
	I0407 13:38:32.769830 1213906 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0407 13:38:32.769905 1213906 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0407 13:38:32.807846 1213906 cri.go:89] found id: ""
	I0407 13:38:32.807891 1213906 logs.go:282] 0 containers: []
	W0407 13:38:32.807903 1213906 logs.go:284] No container was found matching "etcd"
	I0407 13:38:32.807912 1213906 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0407 13:38:32.807976 1213906 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0407 13:38:32.843262 1213906 cri.go:89] found id: ""
	I0407 13:38:32.843299 1213906 logs.go:282] 0 containers: []
	W0407 13:38:32.843308 1213906 logs.go:284] No container was found matching "coredns"
	I0407 13:38:32.843314 1213906 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0407 13:38:32.843371 1213906 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0407 13:38:32.880004 1213906 cri.go:89] found id: ""
	I0407 13:38:32.880035 1213906 logs.go:282] 0 containers: []
	W0407 13:38:32.880045 1213906 logs.go:284] No container was found matching "kube-scheduler"
	I0407 13:38:32.880052 1213906 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0407 13:38:32.880106 1213906 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0407 13:38:32.920902 1213906 cri.go:89] found id: ""
	I0407 13:38:32.920945 1213906 logs.go:282] 0 containers: []
	W0407 13:38:32.920967 1213906 logs.go:284] No container was found matching "kube-proxy"
	I0407 13:38:32.920975 1213906 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0407 13:38:32.921047 1213906 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0407 13:38:32.959333 1213906 cri.go:89] found id: ""
	I0407 13:38:32.959434 1213906 logs.go:282] 0 containers: []
	W0407 13:38:32.959451 1213906 logs.go:284] No container was found matching "kube-controller-manager"
	I0407 13:38:32.959461 1213906 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0407 13:38:32.959536 1213906 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0407 13:38:32.998758 1213906 cri.go:89] found id: ""
	I0407 13:38:32.998789 1213906 logs.go:282] 0 containers: []
	W0407 13:38:32.998798 1213906 logs.go:284] No container was found matching "kindnet"
	I0407 13:38:32.998805 1213906 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0407 13:38:32.998868 1213906 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0407 13:38:33.037549 1213906 cri.go:89] found id: ""
	I0407 13:38:33.037599 1213906 logs.go:282] 0 containers: []
	W0407 13:38:33.037611 1213906 logs.go:284] No container was found matching "kubernetes-dashboard"
	I0407 13:38:33.037625 1213906 logs.go:123] Gathering logs for kubelet ...
	I0407 13:38:33.037643 1213906 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0407 13:38:33.089482 1213906 logs.go:123] Gathering logs for dmesg ...
	I0407 13:38:33.089534 1213906 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0407 13:38:33.105830 1213906 logs.go:123] Gathering logs for describe nodes ...
	I0407 13:38:33.105881 1213906 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0407 13:38:33.195980 1213906 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0407 13:38:33.196012 1213906 logs.go:123] Gathering logs for CRI-O ...
	I0407 13:38:33.196035 1213906 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0407 13:38:33.277068 1213906 logs.go:123] Gathering logs for container status ...
	I0407 13:38:33.277114 1213906 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0407 13:38:35.825938 1213906 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0407 13:38:35.841519 1213906 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0407 13:38:35.841639 1213906 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0407 13:38:35.881745 1213906 cri.go:89] found id: ""
	I0407 13:38:35.881776 1213906 logs.go:282] 0 containers: []
	W0407 13:38:35.881784 1213906 logs.go:284] No container was found matching "kube-apiserver"
	I0407 13:38:35.881790 1213906 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0407 13:38:35.881849 1213906 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0407 13:38:35.920762 1213906 cri.go:89] found id: ""
	I0407 13:38:35.920808 1213906 logs.go:282] 0 containers: []
	W0407 13:38:35.920825 1213906 logs.go:284] No container was found matching "etcd"
	I0407 13:38:35.920834 1213906 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0407 13:38:35.920911 1213906 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0407 13:38:35.959667 1213906 cri.go:89] found id: ""
	I0407 13:38:35.959702 1213906 logs.go:282] 0 containers: []
	W0407 13:38:35.959713 1213906 logs.go:284] No container was found matching "coredns"
	I0407 13:38:35.959722 1213906 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0407 13:38:35.959790 1213906 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0407 13:38:35.999234 1213906 cri.go:89] found id: ""
	I0407 13:38:35.999272 1213906 logs.go:282] 0 containers: []
	W0407 13:38:35.999281 1213906 logs.go:284] No container was found matching "kube-scheduler"
	I0407 13:38:35.999288 1213906 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0407 13:38:35.999354 1213906 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0407 13:38:36.033188 1213906 cri.go:89] found id: ""
	I0407 13:38:36.033222 1213906 logs.go:282] 0 containers: []
	W0407 13:38:36.033230 1213906 logs.go:284] No container was found matching "kube-proxy"
	I0407 13:38:36.033237 1213906 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0407 13:38:36.033300 1213906 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0407 13:38:36.070676 1213906 cri.go:89] found id: ""
	I0407 13:38:36.070708 1213906 logs.go:282] 0 containers: []
	W0407 13:38:36.070716 1213906 logs.go:284] No container was found matching "kube-controller-manager"
	I0407 13:38:36.070723 1213906 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0407 13:38:36.070795 1213906 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0407 13:38:36.109682 1213906 cri.go:89] found id: ""
	I0407 13:38:36.109733 1213906 logs.go:282] 0 containers: []
	W0407 13:38:36.109877 1213906 logs.go:284] No container was found matching "kindnet"
	I0407 13:38:36.109890 1213906 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0407 13:38:36.109969 1213906 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0407 13:38:36.148177 1213906 cri.go:89] found id: ""
	I0407 13:38:36.148210 1213906 logs.go:282] 0 containers: []
	W0407 13:38:36.148222 1213906 logs.go:284] No container was found matching "kubernetes-dashboard"
	I0407 13:38:36.148235 1213906 logs.go:123] Gathering logs for kubelet ...
	I0407 13:38:36.148251 1213906 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0407 13:38:36.200090 1213906 logs.go:123] Gathering logs for dmesg ...
	I0407 13:38:36.200143 1213906 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0407 13:38:36.215875 1213906 logs.go:123] Gathering logs for describe nodes ...
	I0407 13:38:36.215913 1213906 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0407 13:38:36.291340 1213906 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0407 13:38:36.291367 1213906 logs.go:123] Gathering logs for CRI-O ...
	I0407 13:38:36.291386 1213906 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0407 13:38:36.372013 1213906 logs.go:123] Gathering logs for container status ...
	I0407 13:38:36.372063 1213906 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0407 13:38:38.919128 1213906 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0407 13:38:38.935427 1213906 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0407 13:38:38.935501 1213906 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0407 13:38:38.974443 1213906 cri.go:89] found id: ""
	I0407 13:38:38.974495 1213906 logs.go:282] 0 containers: []
	W0407 13:38:38.974510 1213906 logs.go:284] No container was found matching "kube-apiserver"
	I0407 13:38:38.974518 1213906 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0407 13:38:38.974603 1213906 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0407 13:38:39.017761 1213906 cri.go:89] found id: ""
	I0407 13:38:39.017794 1213906 logs.go:282] 0 containers: []
	W0407 13:38:39.017805 1213906 logs.go:284] No container was found matching "etcd"
	I0407 13:38:39.017813 1213906 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0407 13:38:39.017931 1213906 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0407 13:38:39.065465 1213906 cri.go:89] found id: ""
	I0407 13:38:39.065507 1213906 logs.go:282] 0 containers: []
	W0407 13:38:39.065520 1213906 logs.go:284] No container was found matching "coredns"
	I0407 13:38:39.065534 1213906 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0407 13:38:39.065617 1213906 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0407 13:38:39.108343 1213906 cri.go:89] found id: ""
	I0407 13:38:39.108372 1213906 logs.go:282] 0 containers: []
	W0407 13:38:39.108380 1213906 logs.go:284] No container was found matching "kube-scheduler"
	I0407 13:38:39.108387 1213906 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0407 13:38:39.108443 1213906 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0407 13:38:39.149539 1213906 cri.go:89] found id: ""
	I0407 13:38:39.149581 1213906 logs.go:282] 0 containers: []
	W0407 13:38:39.149593 1213906 logs.go:284] No container was found matching "kube-proxy"
	I0407 13:38:39.149602 1213906 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0407 13:38:39.149668 1213906 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0407 13:38:39.187279 1213906 cri.go:89] found id: ""
	I0407 13:38:39.187319 1213906 logs.go:282] 0 containers: []
	W0407 13:38:39.187331 1213906 logs.go:284] No container was found matching "kube-controller-manager"
	I0407 13:38:39.187340 1213906 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0407 13:38:39.187401 1213906 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0407 13:38:39.234479 1213906 cri.go:89] found id: ""
	I0407 13:38:39.234515 1213906 logs.go:282] 0 containers: []
	W0407 13:38:39.234527 1213906 logs.go:284] No container was found matching "kindnet"
	I0407 13:38:39.234534 1213906 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0407 13:38:39.234602 1213906 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0407 13:38:39.274727 1213906 cri.go:89] found id: ""
	I0407 13:38:39.274769 1213906 logs.go:282] 0 containers: []
	W0407 13:38:39.274781 1213906 logs.go:284] No container was found matching "kubernetes-dashboard"
	I0407 13:38:39.274796 1213906 logs.go:123] Gathering logs for dmesg ...
	I0407 13:38:39.274822 1213906 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0407 13:38:39.290424 1213906 logs.go:123] Gathering logs for describe nodes ...
	I0407 13:38:39.290465 1213906 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0407 13:38:39.372161 1213906 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0407 13:38:39.372206 1213906 logs.go:123] Gathering logs for CRI-O ...
	I0407 13:38:39.372225 1213906 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0407 13:38:39.460143 1213906 logs.go:123] Gathering logs for container status ...
	I0407 13:38:39.460205 1213906 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0407 13:38:39.507102 1213906 logs.go:123] Gathering logs for kubelet ...
	I0407 13:38:39.507151 1213906 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0407 13:38:42.069892 1213906 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0407 13:38:42.086117 1213906 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0407 13:38:42.086206 1213906 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0407 13:38:42.122107 1213906 cri.go:89] found id: ""
	I0407 13:38:42.122233 1213906 logs.go:282] 0 containers: []
	W0407 13:38:42.122244 1213906 logs.go:284] No container was found matching "kube-apiserver"
	I0407 13:38:42.122252 1213906 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0407 13:38:42.122318 1213906 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0407 13:38:42.160525 1213906 cri.go:89] found id: ""
	I0407 13:38:42.160565 1213906 logs.go:282] 0 containers: []
	W0407 13:38:42.160578 1213906 logs.go:284] No container was found matching "etcd"
	I0407 13:38:42.160587 1213906 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0407 13:38:42.160654 1213906 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0407 13:38:42.200397 1213906 cri.go:89] found id: ""
	I0407 13:38:42.200440 1213906 logs.go:282] 0 containers: []
	W0407 13:38:42.200449 1213906 logs.go:284] No container was found matching "coredns"
	I0407 13:38:42.200456 1213906 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0407 13:38:42.200512 1213906 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0407 13:38:42.238021 1213906 cri.go:89] found id: ""
	I0407 13:38:42.238052 1213906 logs.go:282] 0 containers: []
	W0407 13:38:42.238063 1213906 logs.go:284] No container was found matching "kube-scheduler"
	I0407 13:38:42.238071 1213906 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0407 13:38:42.238138 1213906 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0407 13:38:42.274897 1213906 cri.go:89] found id: ""
	I0407 13:38:42.274928 1213906 logs.go:282] 0 containers: []
	W0407 13:38:42.274936 1213906 logs.go:284] No container was found matching "kube-proxy"
	I0407 13:38:42.274942 1213906 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0407 13:38:42.275005 1213906 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0407 13:38:42.312776 1213906 cri.go:89] found id: ""
	I0407 13:38:42.312816 1213906 logs.go:282] 0 containers: []
	W0407 13:38:42.312828 1213906 logs.go:284] No container was found matching "kube-controller-manager"
	I0407 13:38:42.312836 1213906 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0407 13:38:42.312909 1213906 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0407 13:38:42.357441 1213906 cri.go:89] found id: ""
	I0407 13:38:42.357477 1213906 logs.go:282] 0 containers: []
	W0407 13:38:42.357488 1213906 logs.go:284] No container was found matching "kindnet"
	I0407 13:38:42.357497 1213906 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0407 13:38:42.357570 1213906 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0407 13:38:42.401331 1213906 cri.go:89] found id: ""
	I0407 13:38:42.401378 1213906 logs.go:282] 0 containers: []
	W0407 13:38:42.401392 1213906 logs.go:284] No container was found matching "kubernetes-dashboard"
	I0407 13:38:42.401407 1213906 logs.go:123] Gathering logs for kubelet ...
	I0407 13:38:42.401425 1213906 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0407 13:38:42.458866 1213906 logs.go:123] Gathering logs for dmesg ...
	I0407 13:38:42.458912 1213906 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0407 13:38:42.474188 1213906 logs.go:123] Gathering logs for describe nodes ...
	I0407 13:38:42.474232 1213906 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0407 13:38:42.551469 1213906 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0407 13:38:42.551505 1213906 logs.go:123] Gathering logs for CRI-O ...
	I0407 13:38:42.551523 1213906 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0407 13:38:42.631784 1213906 logs.go:123] Gathering logs for container status ...
	I0407 13:38:42.631838 1213906 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0407 13:38:45.180211 1213906 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0407 13:38:45.195143 1213906 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0407 13:38:45.195224 1213906 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0407 13:38:45.234064 1213906 cri.go:89] found id: ""
	I0407 13:38:45.234094 1213906 logs.go:282] 0 containers: []
	W0407 13:38:45.234102 1213906 logs.go:284] No container was found matching "kube-apiserver"
	I0407 13:38:45.234109 1213906 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0407 13:38:45.234181 1213906 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0407 13:38:45.275519 1213906 cri.go:89] found id: ""
	I0407 13:38:45.275560 1213906 logs.go:282] 0 containers: []
	W0407 13:38:45.275572 1213906 logs.go:284] No container was found matching "etcd"
	I0407 13:38:45.275580 1213906 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0407 13:38:45.275648 1213906 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0407 13:38:45.309775 1213906 cri.go:89] found id: ""
	I0407 13:38:45.309810 1213906 logs.go:282] 0 containers: []
	W0407 13:38:45.309822 1213906 logs.go:284] No container was found matching "coredns"
	I0407 13:38:45.309830 1213906 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0407 13:38:45.309899 1213906 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0407 13:38:45.346395 1213906 cri.go:89] found id: ""
	I0407 13:38:45.346436 1213906 logs.go:282] 0 containers: []
	W0407 13:38:45.346449 1213906 logs.go:284] No container was found matching "kube-scheduler"
	I0407 13:38:45.346457 1213906 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0407 13:38:45.346526 1213906 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0407 13:38:45.385496 1213906 cri.go:89] found id: ""
	I0407 13:38:45.385534 1213906 logs.go:282] 0 containers: []
	W0407 13:38:45.385545 1213906 logs.go:284] No container was found matching "kube-proxy"
	I0407 13:38:45.385554 1213906 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0407 13:38:45.385620 1213906 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0407 13:38:45.428386 1213906 cri.go:89] found id: ""
	I0407 13:38:45.428420 1213906 logs.go:282] 0 containers: []
	W0407 13:38:45.428432 1213906 logs.go:284] No container was found matching "kube-controller-manager"
	I0407 13:38:45.428441 1213906 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0407 13:38:45.428509 1213906 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0407 13:38:45.466497 1213906 cri.go:89] found id: ""
	I0407 13:38:45.466537 1213906 logs.go:282] 0 containers: []
	W0407 13:38:45.466550 1213906 logs.go:284] No container was found matching "kindnet"
	I0407 13:38:45.466558 1213906 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0407 13:38:45.466628 1213906 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0407 13:38:45.504968 1213906 cri.go:89] found id: ""
	I0407 13:38:45.505005 1213906 logs.go:282] 0 containers: []
	W0407 13:38:45.505041 1213906 logs.go:284] No container was found matching "kubernetes-dashboard"
	I0407 13:38:45.505055 1213906 logs.go:123] Gathering logs for kubelet ...
	I0407 13:38:45.505072 1213906 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0407 13:38:45.558511 1213906 logs.go:123] Gathering logs for dmesg ...
	I0407 13:38:45.558570 1213906 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0407 13:38:45.574234 1213906 logs.go:123] Gathering logs for describe nodes ...
	I0407 13:38:45.574280 1213906 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0407 13:38:45.654571 1213906 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0407 13:38:45.654605 1213906 logs.go:123] Gathering logs for CRI-O ...
	I0407 13:38:45.654624 1213906 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0407 13:38:45.737856 1213906 logs.go:123] Gathering logs for container status ...
	I0407 13:38:45.737904 1213906 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0407 13:38:48.286601 1213906 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0407 13:38:48.300966 1213906 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0407 13:38:48.301056 1213906 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0407 13:38:48.335315 1213906 cri.go:89] found id: ""
	I0407 13:38:48.335345 1213906 logs.go:282] 0 containers: []
	W0407 13:38:48.335356 1213906 logs.go:284] No container was found matching "kube-apiserver"
	I0407 13:38:48.335365 1213906 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0407 13:38:48.335435 1213906 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0407 13:38:48.370562 1213906 cri.go:89] found id: ""
	I0407 13:38:48.370596 1213906 logs.go:282] 0 containers: []
	W0407 13:38:48.370605 1213906 logs.go:284] No container was found matching "etcd"
	I0407 13:38:48.370612 1213906 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0407 13:38:48.370678 1213906 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0407 13:38:48.409481 1213906 cri.go:89] found id: ""
	I0407 13:38:48.409518 1213906 logs.go:282] 0 containers: []
	W0407 13:38:48.409530 1213906 logs.go:284] No container was found matching "coredns"
	I0407 13:38:48.409539 1213906 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0407 13:38:48.409600 1213906 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0407 13:38:48.447172 1213906 cri.go:89] found id: ""
	I0407 13:38:48.447216 1213906 logs.go:282] 0 containers: []
	W0407 13:38:48.447228 1213906 logs.go:284] No container was found matching "kube-scheduler"
	I0407 13:38:48.447236 1213906 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0407 13:38:48.447306 1213906 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0407 13:38:48.483702 1213906 cri.go:89] found id: ""
	I0407 13:38:48.483741 1213906 logs.go:282] 0 containers: []
	W0407 13:38:48.483753 1213906 logs.go:284] No container was found matching "kube-proxy"
	I0407 13:38:48.483762 1213906 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0407 13:38:48.483834 1213906 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0407 13:38:48.520405 1213906 cri.go:89] found id: ""
	I0407 13:38:48.520446 1213906 logs.go:282] 0 containers: []
	W0407 13:38:48.520462 1213906 logs.go:284] No container was found matching "kube-controller-manager"
	I0407 13:38:48.520471 1213906 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0407 13:38:48.520542 1213906 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0407 13:38:48.565550 1213906 cri.go:89] found id: ""
	I0407 13:38:48.565590 1213906 logs.go:282] 0 containers: []
	W0407 13:38:48.565603 1213906 logs.go:284] No container was found matching "kindnet"
	I0407 13:38:48.565612 1213906 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0407 13:38:48.565685 1213906 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0407 13:38:48.602827 1213906 cri.go:89] found id: ""
	I0407 13:38:48.602872 1213906 logs.go:282] 0 containers: []
	W0407 13:38:48.602884 1213906 logs.go:284] No container was found matching "kubernetes-dashboard"
	I0407 13:38:48.602899 1213906 logs.go:123] Gathering logs for kubelet ...
	I0407 13:38:48.602917 1213906 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0407 13:38:48.656151 1213906 logs.go:123] Gathering logs for dmesg ...
	I0407 13:38:48.656204 1213906 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0407 13:38:48.671983 1213906 logs.go:123] Gathering logs for describe nodes ...
	I0407 13:38:48.672017 1213906 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0407 13:38:48.740647 1213906 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0407 13:38:48.740670 1213906 logs.go:123] Gathering logs for CRI-O ...
	I0407 13:38:48.740686 1213906 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0407 13:38:48.824176 1213906 logs.go:123] Gathering logs for container status ...
	I0407 13:38:48.824228 1213906 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0407 13:38:51.373850 1213906 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0407 13:38:51.390044 1213906 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0407 13:38:51.390124 1213906 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0407 13:38:51.430512 1213906 cri.go:89] found id: ""
	I0407 13:38:51.430551 1213906 logs.go:282] 0 containers: []
	W0407 13:38:51.430563 1213906 logs.go:284] No container was found matching "kube-apiserver"
	I0407 13:38:51.430571 1213906 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0407 13:38:51.430644 1213906 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0407 13:38:51.473130 1213906 cri.go:89] found id: ""
	I0407 13:38:51.473175 1213906 logs.go:282] 0 containers: []
	W0407 13:38:51.473187 1213906 logs.go:284] No container was found matching "etcd"
	I0407 13:38:51.473198 1213906 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0407 13:38:51.473275 1213906 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0407 13:38:51.523680 1213906 cri.go:89] found id: ""
	I0407 13:38:51.523710 1213906 logs.go:282] 0 containers: []
	W0407 13:38:51.523722 1213906 logs.go:284] No container was found matching "coredns"
	I0407 13:38:51.523730 1213906 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0407 13:38:51.523797 1213906 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0407 13:38:51.570305 1213906 cri.go:89] found id: ""
	I0407 13:38:51.570340 1213906 logs.go:282] 0 containers: []
	W0407 13:38:51.570352 1213906 logs.go:284] No container was found matching "kube-scheduler"
	I0407 13:38:51.570361 1213906 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0407 13:38:51.570430 1213906 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0407 13:38:51.608904 1213906 cri.go:89] found id: ""
	I0407 13:38:51.608943 1213906 logs.go:282] 0 containers: []
	W0407 13:38:51.608958 1213906 logs.go:284] No container was found matching "kube-proxy"
	I0407 13:38:51.608967 1213906 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0407 13:38:51.609042 1213906 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0407 13:38:51.648489 1213906 cri.go:89] found id: ""
	I0407 13:38:51.648527 1213906 logs.go:282] 0 containers: []
	W0407 13:38:51.648538 1213906 logs.go:284] No container was found matching "kube-controller-manager"
	I0407 13:38:51.648544 1213906 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0407 13:38:51.648617 1213906 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0407 13:38:51.684408 1213906 cri.go:89] found id: ""
	I0407 13:38:51.684458 1213906 logs.go:282] 0 containers: []
	W0407 13:38:51.684472 1213906 logs.go:284] No container was found matching "kindnet"
	I0407 13:38:51.684479 1213906 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0407 13:38:51.684550 1213906 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0407 13:38:51.722650 1213906 cri.go:89] found id: ""
	I0407 13:38:51.722683 1213906 logs.go:282] 0 containers: []
	W0407 13:38:51.722694 1213906 logs.go:284] No container was found matching "kubernetes-dashboard"
	I0407 13:38:51.722706 1213906 logs.go:123] Gathering logs for kubelet ...
	I0407 13:38:51.722723 1213906 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0407 13:38:51.783829 1213906 logs.go:123] Gathering logs for dmesg ...
	I0407 13:38:51.783872 1213906 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0407 13:38:51.800403 1213906 logs.go:123] Gathering logs for describe nodes ...
	I0407 13:38:51.800457 1213906 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0407 13:38:51.881284 1213906 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0407 13:38:51.881312 1213906 logs.go:123] Gathering logs for CRI-O ...
	I0407 13:38:51.881330 1213906 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0407 13:38:51.969272 1213906 logs.go:123] Gathering logs for container status ...
	I0407 13:38:51.969330 1213906 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0407 13:38:54.509915 1213906 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0407 13:38:54.525728 1213906 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0407 13:38:54.525810 1213906 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0407 13:38:54.562960 1213906 cri.go:89] found id: ""
	I0407 13:38:54.563008 1213906 logs.go:282] 0 containers: []
	W0407 13:38:54.563032 1213906 logs.go:284] No container was found matching "kube-apiserver"
	I0407 13:38:54.563041 1213906 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0407 13:38:54.563120 1213906 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0407 13:38:54.599012 1213906 cri.go:89] found id: ""
	I0407 13:38:54.599050 1213906 logs.go:282] 0 containers: []
	W0407 13:38:54.599061 1213906 logs.go:284] No container was found matching "etcd"
	I0407 13:38:54.599067 1213906 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0407 13:38:54.599133 1213906 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0407 13:38:54.636062 1213906 cri.go:89] found id: ""
	I0407 13:38:54.636098 1213906 logs.go:282] 0 containers: []
	W0407 13:38:54.636110 1213906 logs.go:284] No container was found matching "coredns"
	I0407 13:38:54.636119 1213906 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0407 13:38:54.636191 1213906 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0407 13:38:54.673237 1213906 cri.go:89] found id: ""
	I0407 13:38:54.673284 1213906 logs.go:282] 0 containers: []
	W0407 13:38:54.673294 1213906 logs.go:284] No container was found matching "kube-scheduler"
	I0407 13:38:54.673303 1213906 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0407 13:38:54.673373 1213906 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0407 13:38:54.711367 1213906 cri.go:89] found id: ""
	I0407 13:38:54.711405 1213906 logs.go:282] 0 containers: []
	W0407 13:38:54.711415 1213906 logs.go:284] No container was found matching "kube-proxy"
	I0407 13:38:54.711421 1213906 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0407 13:38:54.711477 1213906 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0407 13:38:54.748521 1213906 cri.go:89] found id: ""
	I0407 13:38:54.748560 1213906 logs.go:282] 0 containers: []
	W0407 13:38:54.748571 1213906 logs.go:284] No container was found matching "kube-controller-manager"
	I0407 13:38:54.748578 1213906 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0407 13:38:54.748640 1213906 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0407 13:38:54.783853 1213906 cri.go:89] found id: ""
	I0407 13:38:54.783897 1213906 logs.go:282] 0 containers: []
	W0407 13:38:54.783911 1213906 logs.go:284] No container was found matching "kindnet"
	I0407 13:38:54.783919 1213906 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0407 13:38:54.784013 1213906 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0407 13:38:54.824967 1213906 cri.go:89] found id: ""
	I0407 13:38:54.825001 1213906 logs.go:282] 0 containers: []
	W0407 13:38:54.825014 1213906 logs.go:284] No container was found matching "kubernetes-dashboard"
	I0407 13:38:54.825039 1213906 logs.go:123] Gathering logs for kubelet ...
	I0407 13:38:54.825053 1213906 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0407 13:38:54.876389 1213906 logs.go:123] Gathering logs for dmesg ...
	I0407 13:38:54.876437 1213906 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0407 13:38:54.891036 1213906 logs.go:123] Gathering logs for describe nodes ...
	I0407 13:38:54.891078 1213906 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0407 13:38:54.968867 1213906 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0407 13:38:54.968893 1213906 logs.go:123] Gathering logs for CRI-O ...
	I0407 13:38:54.968907 1213906 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0407 13:38:55.057526 1213906 logs.go:123] Gathering logs for container status ...
	I0407 13:38:55.057580 1213906 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0407 13:38:57.599084 1213906 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0407 13:38:57.613585 1213906 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0407 13:38:57.613829 1213906 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0407 13:38:57.656050 1213906 cri.go:89] found id: ""
	I0407 13:38:57.656087 1213906 logs.go:282] 0 containers: []
	W0407 13:38:57.656099 1213906 logs.go:284] No container was found matching "kube-apiserver"
	I0407 13:38:57.656109 1213906 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0407 13:38:57.656183 1213906 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0407 13:38:57.692833 1213906 cri.go:89] found id: ""
	I0407 13:38:57.692869 1213906 logs.go:282] 0 containers: []
	W0407 13:38:57.692881 1213906 logs.go:284] No container was found matching "etcd"
	I0407 13:38:57.692889 1213906 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0407 13:38:57.692969 1213906 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0407 13:38:57.732943 1213906 cri.go:89] found id: ""
	I0407 13:38:57.732979 1213906 logs.go:282] 0 containers: []
	W0407 13:38:57.732991 1213906 logs.go:284] No container was found matching "coredns"
	I0407 13:38:57.732999 1213906 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0407 13:38:57.733071 1213906 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0407 13:38:57.771650 1213906 cri.go:89] found id: ""
	I0407 13:38:57.771686 1213906 logs.go:282] 0 containers: []
	W0407 13:38:57.771699 1213906 logs.go:284] No container was found matching "kube-scheduler"
	I0407 13:38:57.771707 1213906 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0407 13:38:57.771775 1213906 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0407 13:38:57.827919 1213906 cri.go:89] found id: ""
	I0407 13:38:57.827960 1213906 logs.go:282] 0 containers: []
	W0407 13:38:57.827972 1213906 logs.go:284] No container was found matching "kube-proxy"
	I0407 13:38:57.827981 1213906 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0407 13:38:57.828057 1213906 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0407 13:38:57.868385 1213906 cri.go:89] found id: ""
	I0407 13:38:57.868420 1213906 logs.go:282] 0 containers: []
	W0407 13:38:57.868432 1213906 logs.go:284] No container was found matching "kube-controller-manager"
	I0407 13:38:57.868447 1213906 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0407 13:38:57.868512 1213906 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0407 13:38:57.910773 1213906 cri.go:89] found id: ""
	I0407 13:38:57.910806 1213906 logs.go:282] 0 containers: []
	W0407 13:38:57.910818 1213906 logs.go:284] No container was found matching "kindnet"
	I0407 13:38:57.910825 1213906 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0407 13:38:57.910901 1213906 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0407 13:38:57.950813 1213906 cri.go:89] found id: ""
	I0407 13:38:57.950842 1213906 logs.go:282] 0 containers: []
	W0407 13:38:57.950851 1213906 logs.go:284] No container was found matching "kubernetes-dashboard"
	I0407 13:38:57.950861 1213906 logs.go:123] Gathering logs for kubelet ...
	I0407 13:38:57.950874 1213906 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0407 13:38:58.003798 1213906 logs.go:123] Gathering logs for dmesg ...
	I0407 13:38:58.003847 1213906 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0407 13:38:58.018561 1213906 logs.go:123] Gathering logs for describe nodes ...
	I0407 13:38:58.018602 1213906 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0407 13:38:58.098823 1213906 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0407 13:38:58.098849 1213906 logs.go:123] Gathering logs for CRI-O ...
	I0407 13:38:58.098862 1213906 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0407 13:38:58.185299 1213906 logs.go:123] Gathering logs for container status ...
	I0407 13:38:58.185358 1213906 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0407 13:39:00.726985 1213906 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0407 13:39:00.742679 1213906 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0407 13:39:00.742766 1213906 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0407 13:39:00.778433 1213906 cri.go:89] found id: ""
	I0407 13:39:00.778468 1213906 logs.go:282] 0 containers: []
	W0407 13:39:00.778477 1213906 logs.go:284] No container was found matching "kube-apiserver"
	I0407 13:39:00.778486 1213906 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0407 13:39:00.778549 1213906 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0407 13:39:00.817702 1213906 cri.go:89] found id: ""
	I0407 13:39:00.817759 1213906 logs.go:282] 0 containers: []
	W0407 13:39:00.817769 1213906 logs.go:284] No container was found matching "etcd"
	I0407 13:39:00.817776 1213906 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0407 13:39:00.817835 1213906 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0407 13:39:00.854583 1213906 cri.go:89] found id: ""
	I0407 13:39:00.854619 1213906 logs.go:282] 0 containers: []
	W0407 13:39:00.854631 1213906 logs.go:284] No container was found matching "coredns"
	I0407 13:39:00.854640 1213906 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0407 13:39:00.854719 1213906 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0407 13:39:00.893439 1213906 cri.go:89] found id: ""
	I0407 13:39:00.893470 1213906 logs.go:282] 0 containers: []
	W0407 13:39:00.893485 1213906 logs.go:284] No container was found matching "kube-scheduler"
	I0407 13:39:00.893491 1213906 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0407 13:39:00.893547 1213906 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0407 13:39:00.933608 1213906 cri.go:89] found id: ""
	I0407 13:39:00.933645 1213906 logs.go:282] 0 containers: []
	W0407 13:39:00.933656 1213906 logs.go:284] No container was found matching "kube-proxy"
	I0407 13:39:00.933664 1213906 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0407 13:39:00.933754 1213906 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0407 13:39:00.970581 1213906 cri.go:89] found id: ""
	I0407 13:39:00.970620 1213906 logs.go:282] 0 containers: []
	W0407 13:39:00.970632 1213906 logs.go:284] No container was found matching "kube-controller-manager"
	I0407 13:39:00.970641 1213906 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0407 13:39:00.970718 1213906 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0407 13:39:01.010303 1213906 cri.go:89] found id: ""
	I0407 13:39:01.010334 1213906 logs.go:282] 0 containers: []
	W0407 13:39:01.010342 1213906 logs.go:284] No container was found matching "kindnet"
	I0407 13:39:01.010349 1213906 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0407 13:39:01.010417 1213906 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0407 13:39:01.049082 1213906 cri.go:89] found id: ""
	I0407 13:39:01.049112 1213906 logs.go:282] 0 containers: []
	W0407 13:39:01.049123 1213906 logs.go:284] No container was found matching "kubernetes-dashboard"
	I0407 13:39:01.049135 1213906 logs.go:123] Gathering logs for kubelet ...
	I0407 13:39:01.049150 1213906 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0407 13:39:01.098676 1213906 logs.go:123] Gathering logs for dmesg ...
	I0407 13:39:01.098723 1213906 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0407 13:39:01.112987 1213906 logs.go:123] Gathering logs for describe nodes ...
	I0407 13:39:01.113025 1213906 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0407 13:39:01.181533 1213906 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0407 13:39:01.181560 1213906 logs.go:123] Gathering logs for CRI-O ...
	I0407 13:39:01.181572 1213906 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0407 13:39:01.265712 1213906 logs.go:123] Gathering logs for container status ...
	I0407 13:39:01.265762 1213906 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0407 13:39:03.805818 1213906 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0407 13:39:03.821635 1213906 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0407 13:39:03.821755 1213906 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0407 13:39:03.861454 1213906 cri.go:89] found id: ""
	I0407 13:39:03.861491 1213906 logs.go:282] 0 containers: []
	W0407 13:39:03.861501 1213906 logs.go:284] No container was found matching "kube-apiserver"
	I0407 13:39:03.861509 1213906 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0407 13:39:03.861579 1213906 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0407 13:39:03.900456 1213906 cri.go:89] found id: ""
	I0407 13:39:03.900505 1213906 logs.go:282] 0 containers: []
	W0407 13:39:03.900515 1213906 logs.go:284] No container was found matching "etcd"
	I0407 13:39:03.900532 1213906 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0407 13:39:03.900610 1213906 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0407 13:39:03.946116 1213906 cri.go:89] found id: ""
	I0407 13:39:03.946159 1213906 logs.go:282] 0 containers: []
	W0407 13:39:03.946171 1213906 logs.go:284] No container was found matching "coredns"
	I0407 13:39:03.946181 1213906 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0407 13:39:03.946255 1213906 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0407 13:39:03.985824 1213906 cri.go:89] found id: ""
	I0407 13:39:03.985861 1213906 logs.go:282] 0 containers: []
	W0407 13:39:03.985874 1213906 logs.go:284] No container was found matching "kube-scheduler"
	I0407 13:39:03.985883 1213906 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0407 13:39:03.985955 1213906 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0407 13:39:04.027881 1213906 cri.go:89] found id: ""
	I0407 13:39:04.027916 1213906 logs.go:282] 0 containers: []
	W0407 13:39:04.027925 1213906 logs.go:284] No container was found matching "kube-proxy"
	I0407 13:39:04.027934 1213906 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0407 13:39:04.028014 1213906 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0407 13:39:04.070439 1213906 cri.go:89] found id: ""
	I0407 13:39:04.070472 1213906 logs.go:282] 0 containers: []
	W0407 13:39:04.070487 1213906 logs.go:284] No container was found matching "kube-controller-manager"
	I0407 13:39:04.070496 1213906 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0407 13:39:04.070565 1213906 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0407 13:39:04.112274 1213906 cri.go:89] found id: ""
	I0407 13:39:04.112308 1213906 logs.go:282] 0 containers: []
	W0407 13:39:04.112344 1213906 logs.go:284] No container was found matching "kindnet"
	I0407 13:39:04.112355 1213906 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0407 13:39:04.112423 1213906 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0407 13:39:04.148741 1213906 cri.go:89] found id: ""
	I0407 13:39:04.148773 1213906 logs.go:282] 0 containers: []
	W0407 13:39:04.148781 1213906 logs.go:284] No container was found matching "kubernetes-dashboard"
	I0407 13:39:04.148792 1213906 logs.go:123] Gathering logs for container status ...
	I0407 13:39:04.148810 1213906 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0407 13:39:04.196482 1213906 logs.go:123] Gathering logs for kubelet ...
	I0407 13:39:04.196543 1213906 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0407 13:39:04.257276 1213906 logs.go:123] Gathering logs for dmesg ...
	I0407 13:39:04.257336 1213906 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0407 13:39:04.280304 1213906 logs.go:123] Gathering logs for describe nodes ...
	I0407 13:39:04.280344 1213906 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0407 13:39:04.366581 1213906 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0407 13:39:04.366614 1213906 logs.go:123] Gathering logs for CRI-O ...
	I0407 13:39:04.366634 1213906 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0407 13:39:06.960903 1213906 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0407 13:39:06.976806 1213906 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0407 13:39:06.976910 1213906 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0407 13:39:07.015432 1213906 cri.go:89] found id: ""
	I0407 13:39:07.015469 1213906 logs.go:282] 0 containers: []
	W0407 13:39:07.015486 1213906 logs.go:284] No container was found matching "kube-apiserver"
	I0407 13:39:07.015495 1213906 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0407 13:39:07.015558 1213906 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0407 13:39:07.053971 1213906 cri.go:89] found id: ""
	I0407 13:39:07.054003 1213906 logs.go:282] 0 containers: []
	W0407 13:39:07.054014 1213906 logs.go:284] No container was found matching "etcd"
	I0407 13:39:07.054026 1213906 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0407 13:39:07.054096 1213906 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0407 13:39:07.090474 1213906 cri.go:89] found id: ""
	I0407 13:39:07.090504 1213906 logs.go:282] 0 containers: []
	W0407 13:39:07.090512 1213906 logs.go:284] No container was found matching "coredns"
	I0407 13:39:07.090519 1213906 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0407 13:39:07.090576 1213906 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0407 13:39:07.127866 1213906 cri.go:89] found id: ""
	I0407 13:39:07.127896 1213906 logs.go:282] 0 containers: []
	W0407 13:39:07.127905 1213906 logs.go:284] No container was found matching "kube-scheduler"
	I0407 13:39:07.127913 1213906 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0407 13:39:07.127983 1213906 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0407 13:39:07.166237 1213906 cri.go:89] found id: ""
	I0407 13:39:07.166275 1213906 logs.go:282] 0 containers: []
	W0407 13:39:07.166286 1213906 logs.go:284] No container was found matching "kube-proxy"
	I0407 13:39:07.166293 1213906 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0407 13:39:07.166361 1213906 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0407 13:39:07.207982 1213906 cri.go:89] found id: ""
	I0407 13:39:07.208030 1213906 logs.go:282] 0 containers: []
	W0407 13:39:07.208042 1213906 logs.go:284] No container was found matching "kube-controller-manager"
	I0407 13:39:07.208051 1213906 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0407 13:39:07.208131 1213906 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0407 13:39:07.244487 1213906 cri.go:89] found id: ""
	I0407 13:39:07.244520 1213906 logs.go:282] 0 containers: []
	W0407 13:39:07.244532 1213906 logs.go:284] No container was found matching "kindnet"
	I0407 13:39:07.244540 1213906 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0407 13:39:07.244641 1213906 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0407 13:39:07.282147 1213906 cri.go:89] found id: ""
	I0407 13:39:07.282200 1213906 logs.go:282] 0 containers: []
	W0407 13:39:07.282212 1213906 logs.go:284] No container was found matching "kubernetes-dashboard"
	I0407 13:39:07.282224 1213906 logs.go:123] Gathering logs for dmesg ...
	I0407 13:39:07.282245 1213906 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0407 13:39:07.296216 1213906 logs.go:123] Gathering logs for describe nodes ...
	I0407 13:39:07.296260 1213906 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0407 13:39:07.365389 1213906 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0407 13:39:07.365413 1213906 logs.go:123] Gathering logs for CRI-O ...
	I0407 13:39:07.365429 1213906 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0407 13:39:07.457333 1213906 logs.go:123] Gathering logs for container status ...
	I0407 13:39:07.457394 1213906 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0407 13:39:07.500677 1213906 logs.go:123] Gathering logs for kubelet ...
	I0407 13:39:07.500821 1213906 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0407 13:39:10.061891 1213906 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0407 13:39:10.085688 1213906 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0407 13:39:10.085804 1213906 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0407 13:39:10.129840 1213906 cri.go:89] found id: ""
	I0407 13:39:10.129878 1213906 logs.go:282] 0 containers: []
	W0407 13:39:10.129890 1213906 logs.go:284] No container was found matching "kube-apiserver"
	I0407 13:39:10.129898 1213906 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0407 13:39:10.129965 1213906 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0407 13:39:10.170088 1213906 cri.go:89] found id: ""
	I0407 13:39:10.170124 1213906 logs.go:282] 0 containers: []
	W0407 13:39:10.170136 1213906 logs.go:284] No container was found matching "etcd"
	I0407 13:39:10.170145 1213906 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0407 13:39:10.170224 1213906 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0407 13:39:10.217057 1213906 cri.go:89] found id: ""
	I0407 13:39:10.217101 1213906 logs.go:282] 0 containers: []
	W0407 13:39:10.217115 1213906 logs.go:284] No container was found matching "coredns"
	I0407 13:39:10.217125 1213906 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0407 13:39:10.217205 1213906 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0407 13:39:10.261057 1213906 cri.go:89] found id: ""
	I0407 13:39:10.261083 1213906 logs.go:282] 0 containers: []
	W0407 13:39:10.261092 1213906 logs.go:284] No container was found matching "kube-scheduler"
	I0407 13:39:10.261098 1213906 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0407 13:39:10.261239 1213906 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0407 13:39:10.298702 1213906 cri.go:89] found id: ""
	I0407 13:39:10.298743 1213906 logs.go:282] 0 containers: []
	W0407 13:39:10.298756 1213906 logs.go:284] No container was found matching "kube-proxy"
	I0407 13:39:10.298764 1213906 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0407 13:39:10.298838 1213906 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0407 13:39:10.338689 1213906 cri.go:89] found id: ""
	I0407 13:39:10.338722 1213906 logs.go:282] 0 containers: []
	W0407 13:39:10.338733 1213906 logs.go:284] No container was found matching "kube-controller-manager"
	I0407 13:39:10.338742 1213906 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0407 13:39:10.338816 1213906 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0407 13:39:10.374983 1213906 cri.go:89] found id: ""
	I0407 13:39:10.375076 1213906 logs.go:282] 0 containers: []
	W0407 13:39:10.375100 1213906 logs.go:284] No container was found matching "kindnet"
	I0407 13:39:10.375111 1213906 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0407 13:39:10.375186 1213906 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0407 13:39:10.417715 1213906 cri.go:89] found id: ""
	I0407 13:39:10.417751 1213906 logs.go:282] 0 containers: []
	W0407 13:39:10.417764 1213906 logs.go:284] No container was found matching "kubernetes-dashboard"
	I0407 13:39:10.417777 1213906 logs.go:123] Gathering logs for CRI-O ...
	I0407 13:39:10.417792 1213906 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0407 13:39:10.502364 1213906 logs.go:123] Gathering logs for container status ...
	I0407 13:39:10.502411 1213906 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0407 13:39:10.562130 1213906 logs.go:123] Gathering logs for kubelet ...
	I0407 13:39:10.562173 1213906 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0407 13:39:10.635072 1213906 logs.go:123] Gathering logs for dmesg ...
	I0407 13:39:10.635135 1213906 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0407 13:39:10.654744 1213906 logs.go:123] Gathering logs for describe nodes ...
	I0407 13:39:10.654786 1213906 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0407 13:39:10.747592 1213906 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0407 13:39:13.248636 1213906 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0407 13:39:13.265867 1213906 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0407 13:39:13.265984 1213906 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0407 13:39:13.313918 1213906 cri.go:89] found id: ""
	I0407 13:39:13.313967 1213906 logs.go:282] 0 containers: []
	W0407 13:39:13.313976 1213906 logs.go:284] No container was found matching "kube-apiserver"
	I0407 13:39:13.313983 1213906 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0407 13:39:13.314047 1213906 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0407 13:39:13.359217 1213906 cri.go:89] found id: ""
	I0407 13:39:13.359249 1213906 logs.go:282] 0 containers: []
	W0407 13:39:13.359260 1213906 logs.go:284] No container was found matching "etcd"
	I0407 13:39:13.359269 1213906 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0407 13:39:13.359408 1213906 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0407 13:39:13.407033 1213906 cri.go:89] found id: ""
	I0407 13:39:13.407069 1213906 logs.go:282] 0 containers: []
	W0407 13:39:13.407082 1213906 logs.go:284] No container was found matching "coredns"
	I0407 13:39:13.407091 1213906 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0407 13:39:13.407155 1213906 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0407 13:39:13.453529 1213906 cri.go:89] found id: ""
	I0407 13:39:13.453564 1213906 logs.go:282] 0 containers: []
	W0407 13:39:13.453577 1213906 logs.go:284] No container was found matching "kube-scheduler"
	I0407 13:39:13.453585 1213906 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0407 13:39:13.453653 1213906 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0407 13:39:13.500234 1213906 cri.go:89] found id: ""
	I0407 13:39:13.500262 1213906 logs.go:282] 0 containers: []
	W0407 13:39:13.500277 1213906 logs.go:284] No container was found matching "kube-proxy"
	I0407 13:39:13.500287 1213906 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0407 13:39:13.500365 1213906 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0407 13:39:13.548352 1213906 cri.go:89] found id: ""
	I0407 13:39:13.548394 1213906 logs.go:282] 0 containers: []
	W0407 13:39:13.548406 1213906 logs.go:284] No container was found matching "kube-controller-manager"
	I0407 13:39:13.548416 1213906 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0407 13:39:13.548485 1213906 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0407 13:39:13.590264 1213906 cri.go:89] found id: ""
	I0407 13:39:13.590295 1213906 logs.go:282] 0 containers: []
	W0407 13:39:13.590306 1213906 logs.go:284] No container was found matching "kindnet"
	I0407 13:39:13.590313 1213906 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0407 13:39:13.590389 1213906 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0407 13:39:13.648538 1213906 cri.go:89] found id: ""
	I0407 13:39:13.648577 1213906 logs.go:282] 0 containers: []
	W0407 13:39:13.648591 1213906 logs.go:284] No container was found matching "kubernetes-dashboard"
	I0407 13:39:13.648606 1213906 logs.go:123] Gathering logs for kubelet ...
	I0407 13:39:13.648625 1213906 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0407 13:39:13.725971 1213906 logs.go:123] Gathering logs for dmesg ...
	I0407 13:39:13.726030 1213906 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0407 13:39:13.748376 1213906 logs.go:123] Gathering logs for describe nodes ...
	I0407 13:39:13.748432 1213906 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0407 13:39:13.840569 1213906 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0407 13:39:13.840600 1213906 logs.go:123] Gathering logs for CRI-O ...
	I0407 13:39:13.840619 1213906 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0407 13:39:13.936462 1213906 logs.go:123] Gathering logs for container status ...
	I0407 13:39:13.936508 1213906 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0407 13:39:16.493932 1213906 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0407 13:39:16.510715 1213906 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0407 13:39:16.510799 1213906 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0407 13:39:16.562289 1213906 cri.go:89] found id: ""
	I0407 13:39:16.562331 1213906 logs.go:282] 0 containers: []
	W0407 13:39:16.562344 1213906 logs.go:284] No container was found matching "kube-apiserver"
	I0407 13:39:16.562353 1213906 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0407 13:39:16.562429 1213906 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0407 13:39:16.609775 1213906 cri.go:89] found id: ""
	I0407 13:39:16.609815 1213906 logs.go:282] 0 containers: []
	W0407 13:39:16.609828 1213906 logs.go:284] No container was found matching "etcd"
	I0407 13:39:16.609838 1213906 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0407 13:39:16.609921 1213906 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0407 13:39:16.661858 1213906 cri.go:89] found id: ""
	I0407 13:39:16.661894 1213906 logs.go:282] 0 containers: []
	W0407 13:39:16.661907 1213906 logs.go:284] No container was found matching "coredns"
	I0407 13:39:16.661914 1213906 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0407 13:39:16.661982 1213906 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0407 13:39:16.715903 1213906 cri.go:89] found id: ""
	I0407 13:39:16.715934 1213906 logs.go:282] 0 containers: []
	W0407 13:39:16.715942 1213906 logs.go:284] No container was found matching "kube-scheduler"
	I0407 13:39:16.715949 1213906 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0407 13:39:16.716018 1213906 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0407 13:39:16.765451 1213906 cri.go:89] found id: ""
	I0407 13:39:16.765490 1213906 logs.go:282] 0 containers: []
	W0407 13:39:16.765499 1213906 logs.go:284] No container was found matching "kube-proxy"
	I0407 13:39:16.765506 1213906 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0407 13:39:16.765572 1213906 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0407 13:39:16.809602 1213906 cri.go:89] found id: ""
	I0407 13:39:16.809641 1213906 logs.go:282] 0 containers: []
	W0407 13:39:16.809654 1213906 logs.go:284] No container was found matching "kube-controller-manager"
	I0407 13:39:16.809663 1213906 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0407 13:39:16.809767 1213906 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0407 13:39:16.854405 1213906 cri.go:89] found id: ""
	I0407 13:39:16.854439 1213906 logs.go:282] 0 containers: []
	W0407 13:39:16.854451 1213906 logs.go:284] No container was found matching "kindnet"
	I0407 13:39:16.854458 1213906 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0407 13:39:16.854529 1213906 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0407 13:39:16.904597 1213906 cri.go:89] found id: ""
	I0407 13:39:16.904634 1213906 logs.go:282] 0 containers: []
	W0407 13:39:16.904645 1213906 logs.go:284] No container was found matching "kubernetes-dashboard"
	I0407 13:39:16.904658 1213906 logs.go:123] Gathering logs for container status ...
	I0407 13:39:16.904676 1213906 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0407 13:39:16.956109 1213906 logs.go:123] Gathering logs for kubelet ...
	I0407 13:39:16.956152 1213906 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0407 13:39:17.016665 1213906 logs.go:123] Gathering logs for dmesg ...
	I0407 13:39:17.016720 1213906 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0407 13:39:17.037935 1213906 logs.go:123] Gathering logs for describe nodes ...
	I0407 13:39:17.037985 1213906 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0407 13:39:17.137448 1213906 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0407 13:39:17.137476 1213906 logs.go:123] Gathering logs for CRI-O ...
	I0407 13:39:17.137495 1213906 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0407 13:39:19.726803 1213906 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0407 13:39:19.742331 1213906 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0407 13:39:19.742441 1213906 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0407 13:39:19.787148 1213906 cri.go:89] found id: ""
	I0407 13:39:19.787270 1213906 logs.go:282] 0 containers: []
	W0407 13:39:19.787308 1213906 logs.go:284] No container was found matching "kube-apiserver"
	I0407 13:39:19.787326 1213906 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0407 13:39:19.787424 1213906 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0407 13:39:19.841461 1213906 cri.go:89] found id: ""
	I0407 13:39:19.841490 1213906 logs.go:282] 0 containers: []
	W0407 13:39:19.841498 1213906 logs.go:284] No container was found matching "etcd"
	I0407 13:39:19.841512 1213906 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0407 13:39:19.841567 1213906 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0407 13:39:19.885754 1213906 cri.go:89] found id: ""
	I0407 13:39:19.885793 1213906 logs.go:282] 0 containers: []
	W0407 13:39:19.885806 1213906 logs.go:284] No container was found matching "coredns"
	I0407 13:39:19.885814 1213906 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0407 13:39:19.885897 1213906 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0407 13:39:19.942984 1213906 cri.go:89] found id: ""
	I0407 13:39:19.943022 1213906 logs.go:282] 0 containers: []
	W0407 13:39:19.943035 1213906 logs.go:284] No container was found matching "kube-scheduler"
	I0407 13:39:19.943043 1213906 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0407 13:39:19.943118 1213906 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0407 13:39:19.985849 1213906 cri.go:89] found id: ""
	I0407 13:39:19.985890 1213906 logs.go:282] 0 containers: []
	W0407 13:39:19.985914 1213906 logs.go:284] No container was found matching "kube-proxy"
	I0407 13:39:19.985924 1213906 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0407 13:39:19.986002 1213906 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0407 13:39:20.026472 1213906 cri.go:89] found id: ""
	I0407 13:39:20.026506 1213906 logs.go:282] 0 containers: []
	W0407 13:39:20.026519 1213906 logs.go:284] No container was found matching "kube-controller-manager"
	I0407 13:39:20.026528 1213906 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0407 13:39:20.026647 1213906 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0407 13:39:20.066686 1213906 cri.go:89] found id: ""
	I0407 13:39:20.066724 1213906 logs.go:282] 0 containers: []
	W0407 13:39:20.066737 1213906 logs.go:284] No container was found matching "kindnet"
	I0407 13:39:20.066745 1213906 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0407 13:39:20.066811 1213906 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0407 13:39:20.107031 1213906 cri.go:89] found id: ""
	I0407 13:39:20.107068 1213906 logs.go:282] 0 containers: []
	W0407 13:39:20.107080 1213906 logs.go:284] No container was found matching "kubernetes-dashboard"
	I0407 13:39:20.107093 1213906 logs.go:123] Gathering logs for CRI-O ...
	I0407 13:39:20.107108 1213906 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0407 13:39:20.206604 1213906 logs.go:123] Gathering logs for container status ...
	I0407 13:39:20.206658 1213906 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0407 13:39:20.253583 1213906 logs.go:123] Gathering logs for kubelet ...
	I0407 13:39:20.253629 1213906 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0407 13:39:20.312241 1213906 logs.go:123] Gathering logs for dmesg ...
	I0407 13:39:20.312311 1213906 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0407 13:39:20.331325 1213906 logs.go:123] Gathering logs for describe nodes ...
	I0407 13:39:20.331377 1213906 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0407 13:39:20.417693 1213906 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0407 13:39:22.918405 1213906 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0407 13:39:22.931626 1213906 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0407 13:39:22.931693 1213906 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0407 13:39:22.966645 1213906 cri.go:89] found id: ""
	I0407 13:39:22.966678 1213906 logs.go:282] 0 containers: []
	W0407 13:39:22.966687 1213906 logs.go:284] No container was found matching "kube-apiserver"
	I0407 13:39:22.966693 1213906 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0407 13:39:22.966753 1213906 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0407 13:39:23.004234 1213906 cri.go:89] found id: ""
	I0407 13:39:23.004275 1213906 logs.go:282] 0 containers: []
	W0407 13:39:23.004287 1213906 logs.go:284] No container was found matching "etcd"
	I0407 13:39:23.004295 1213906 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0407 13:39:23.004364 1213906 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0407 13:39:23.042555 1213906 cri.go:89] found id: ""
	I0407 13:39:23.042592 1213906 logs.go:282] 0 containers: []
	W0407 13:39:23.042605 1213906 logs.go:284] No container was found matching "coredns"
	I0407 13:39:23.042613 1213906 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0407 13:39:23.042685 1213906 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0407 13:39:23.085559 1213906 cri.go:89] found id: ""
	I0407 13:39:23.085590 1213906 logs.go:282] 0 containers: []
	W0407 13:39:23.085602 1213906 logs.go:284] No container was found matching "kube-scheduler"
	I0407 13:39:23.085609 1213906 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0407 13:39:23.085681 1213906 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0407 13:39:23.122479 1213906 cri.go:89] found id: ""
	I0407 13:39:23.122517 1213906 logs.go:282] 0 containers: []
	W0407 13:39:23.122577 1213906 logs.go:284] No container was found matching "kube-proxy"
	I0407 13:39:23.122591 1213906 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0407 13:39:23.122664 1213906 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0407 13:39:23.162640 1213906 cri.go:89] found id: ""
	I0407 13:39:23.162673 1213906 logs.go:282] 0 containers: []
	W0407 13:39:23.162683 1213906 logs.go:284] No container was found matching "kube-controller-manager"
	I0407 13:39:23.162689 1213906 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0407 13:39:23.162754 1213906 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0407 13:39:23.205945 1213906 cri.go:89] found id: ""
	I0407 13:39:23.205975 1213906 logs.go:282] 0 containers: []
	W0407 13:39:23.205986 1213906 logs.go:284] No container was found matching "kindnet"
	I0407 13:39:23.205994 1213906 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0407 13:39:23.206064 1213906 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0407 13:39:23.248167 1213906 cri.go:89] found id: ""
	I0407 13:39:23.248202 1213906 logs.go:282] 0 containers: []
	W0407 13:39:23.248213 1213906 logs.go:284] No container was found matching "kubernetes-dashboard"
	I0407 13:39:23.248225 1213906 logs.go:123] Gathering logs for kubelet ...
	I0407 13:39:23.248242 1213906 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0407 13:39:23.301366 1213906 logs.go:123] Gathering logs for dmesg ...
	I0407 13:39:23.301410 1213906 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0407 13:39:23.315326 1213906 logs.go:123] Gathering logs for describe nodes ...
	I0407 13:39:23.315361 1213906 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0407 13:39:23.397796 1213906 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0407 13:39:23.397819 1213906 logs.go:123] Gathering logs for CRI-O ...
	I0407 13:39:23.397838 1213906 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0407 13:39:23.487156 1213906 logs.go:123] Gathering logs for container status ...
	I0407 13:39:23.487209 1213906 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0407 13:39:26.031068 1213906 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0407 13:39:26.049167 1213906 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0407 13:39:26.049254 1213906 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0407 13:39:26.096585 1213906 cri.go:89] found id: ""
	I0407 13:39:26.096618 1213906 logs.go:282] 0 containers: []
	W0407 13:39:26.096627 1213906 logs.go:284] No container was found matching "kube-apiserver"
	I0407 13:39:26.096642 1213906 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0407 13:39:26.096698 1213906 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0407 13:39:26.139747 1213906 cri.go:89] found id: ""
	I0407 13:39:26.139790 1213906 logs.go:282] 0 containers: []
	W0407 13:39:26.139804 1213906 logs.go:284] No container was found matching "etcd"
	I0407 13:39:26.139814 1213906 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0407 13:39:26.139894 1213906 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0407 13:39:26.175643 1213906 cri.go:89] found id: ""
	I0407 13:39:26.175673 1213906 logs.go:282] 0 containers: []
	W0407 13:39:26.175683 1213906 logs.go:284] No container was found matching "coredns"
	I0407 13:39:26.175689 1213906 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0407 13:39:26.175753 1213906 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0407 13:39:26.215493 1213906 cri.go:89] found id: ""
	I0407 13:39:26.215529 1213906 logs.go:282] 0 containers: []
	W0407 13:39:26.215540 1213906 logs.go:284] No container was found matching "kube-scheduler"
	I0407 13:39:26.215547 1213906 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0407 13:39:26.215623 1213906 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0407 13:39:26.254845 1213906 cri.go:89] found id: ""
	I0407 13:39:26.254878 1213906 logs.go:282] 0 containers: []
	W0407 13:39:26.254890 1213906 logs.go:284] No container was found matching "kube-proxy"
	I0407 13:39:26.254898 1213906 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0407 13:39:26.254974 1213906 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0407 13:39:26.298835 1213906 cri.go:89] found id: ""
	I0407 13:39:26.298873 1213906 logs.go:282] 0 containers: []
	W0407 13:39:26.298885 1213906 logs.go:284] No container was found matching "kube-controller-manager"
	I0407 13:39:26.298894 1213906 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0407 13:39:26.298997 1213906 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0407 13:39:26.343191 1213906 cri.go:89] found id: ""
	I0407 13:39:26.343237 1213906 logs.go:282] 0 containers: []
	W0407 13:39:26.343253 1213906 logs.go:284] No container was found matching "kindnet"
	I0407 13:39:26.343262 1213906 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0407 13:39:26.343342 1213906 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0407 13:39:26.390932 1213906 cri.go:89] found id: ""
	I0407 13:39:26.390968 1213906 logs.go:282] 0 containers: []
	W0407 13:39:26.390981 1213906 logs.go:284] No container was found matching "kubernetes-dashboard"
	I0407 13:39:26.390995 1213906 logs.go:123] Gathering logs for kubelet ...
	I0407 13:39:26.391018 1213906 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0407 13:39:26.462812 1213906 logs.go:123] Gathering logs for dmesg ...
	I0407 13:39:26.462862 1213906 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0407 13:39:26.479786 1213906 logs.go:123] Gathering logs for describe nodes ...
	I0407 13:39:26.479834 1213906 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0407 13:39:26.561552 1213906 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0407 13:39:26.561585 1213906 logs.go:123] Gathering logs for CRI-O ...
	I0407 13:39:26.561603 1213906 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0407 13:39:26.655451 1213906 logs.go:123] Gathering logs for container status ...
	I0407 13:39:26.655505 1213906 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0407 13:39:29.204255 1213906 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0407 13:39:29.218239 1213906 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0407 13:39:29.218335 1213906 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0407 13:39:29.255403 1213906 cri.go:89] found id: ""
	I0407 13:39:29.255455 1213906 logs.go:282] 0 containers: []
	W0407 13:39:29.255466 1213906 logs.go:284] No container was found matching "kube-apiserver"
	I0407 13:39:29.255474 1213906 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0407 13:39:29.255536 1213906 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0407 13:39:29.297474 1213906 cri.go:89] found id: ""
	I0407 13:39:29.297504 1213906 logs.go:282] 0 containers: []
	W0407 13:39:29.297517 1213906 logs.go:284] No container was found matching "etcd"
	I0407 13:39:29.297525 1213906 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0407 13:39:29.297592 1213906 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0407 13:39:29.336751 1213906 cri.go:89] found id: ""
	I0407 13:39:29.336802 1213906 logs.go:282] 0 containers: []
	W0407 13:39:29.336812 1213906 logs.go:284] No container was found matching "coredns"
	I0407 13:39:29.336820 1213906 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0407 13:39:29.336892 1213906 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0407 13:39:29.372685 1213906 cri.go:89] found id: ""
	I0407 13:39:29.372722 1213906 logs.go:282] 0 containers: []
	W0407 13:39:29.372734 1213906 logs.go:284] No container was found matching "kube-scheduler"
	I0407 13:39:29.372745 1213906 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0407 13:39:29.372819 1213906 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0407 13:39:29.415807 1213906 cri.go:89] found id: ""
	I0407 13:39:29.415928 1213906 logs.go:282] 0 containers: []
	W0407 13:39:29.415979 1213906 logs.go:284] No container was found matching "kube-proxy"
	I0407 13:39:29.416011 1213906 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0407 13:39:29.416101 1213906 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0407 13:39:29.454937 1213906 cri.go:89] found id: ""
	I0407 13:39:29.454975 1213906 logs.go:282] 0 containers: []
	W0407 13:39:29.454986 1213906 logs.go:284] No container was found matching "kube-controller-manager"
	I0407 13:39:29.454997 1213906 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0407 13:39:29.455066 1213906 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0407 13:39:29.497862 1213906 cri.go:89] found id: ""
	I0407 13:39:29.497894 1213906 logs.go:282] 0 containers: []
	W0407 13:39:29.497908 1213906 logs.go:284] No container was found matching "kindnet"
	I0407 13:39:29.497930 1213906 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0407 13:39:29.498040 1213906 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0407 13:39:29.540136 1213906 cri.go:89] found id: ""
	I0407 13:39:29.540180 1213906 logs.go:282] 0 containers: []
	W0407 13:39:29.540195 1213906 logs.go:284] No container was found matching "kubernetes-dashboard"
	I0407 13:39:29.540210 1213906 logs.go:123] Gathering logs for kubelet ...
	I0407 13:39:29.540227 1213906 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0407 13:39:29.597391 1213906 logs.go:123] Gathering logs for dmesg ...
	I0407 13:39:29.597448 1213906 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0407 13:39:29.614730 1213906 logs.go:123] Gathering logs for describe nodes ...
	I0407 13:39:29.614781 1213906 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0407 13:39:29.686287 1213906 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0407 13:39:29.686313 1213906 logs.go:123] Gathering logs for CRI-O ...
	I0407 13:39:29.686330 1213906 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0407 13:39:29.769604 1213906 logs.go:123] Gathering logs for container status ...
	I0407 13:39:29.769656 1213906 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0407 13:39:32.313604 1213906 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0407 13:39:32.329537 1213906 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0407 13:39:32.329647 1213906 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0407 13:39:32.366111 1213906 cri.go:89] found id: ""
	I0407 13:39:32.366151 1213906 logs.go:282] 0 containers: []
	W0407 13:39:32.366163 1213906 logs.go:284] No container was found matching "kube-apiserver"
	I0407 13:39:32.366173 1213906 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0407 13:39:32.366263 1213906 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0407 13:39:32.407513 1213906 cri.go:89] found id: ""
	I0407 13:39:32.407547 1213906 logs.go:282] 0 containers: []
	W0407 13:39:32.407559 1213906 logs.go:284] No container was found matching "etcd"
	I0407 13:39:32.407566 1213906 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0407 13:39:32.407638 1213906 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0407 13:39:32.447853 1213906 cri.go:89] found id: ""
	I0407 13:39:32.447888 1213906 logs.go:282] 0 containers: []
	W0407 13:39:32.447899 1213906 logs.go:284] No container was found matching "coredns"
	I0407 13:39:32.447908 1213906 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0407 13:39:32.447967 1213906 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0407 13:39:32.487113 1213906 cri.go:89] found id: ""
	I0407 13:39:32.487153 1213906 logs.go:282] 0 containers: []
	W0407 13:39:32.487166 1213906 logs.go:284] No container was found matching "kube-scheduler"
	I0407 13:39:32.487175 1213906 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0407 13:39:32.487250 1213906 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0407 13:39:32.528807 1213906 cri.go:89] found id: ""
	I0407 13:39:32.528846 1213906 logs.go:282] 0 containers: []
	W0407 13:39:32.528856 1213906 logs.go:284] No container was found matching "kube-proxy"
	I0407 13:39:32.528863 1213906 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0407 13:39:32.528982 1213906 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0407 13:39:32.569409 1213906 cri.go:89] found id: ""
	I0407 13:39:32.569441 1213906 logs.go:282] 0 containers: []
	W0407 13:39:32.569449 1213906 logs.go:284] No container was found matching "kube-controller-manager"
	I0407 13:39:32.569456 1213906 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0407 13:39:32.569513 1213906 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0407 13:39:32.619067 1213906 cri.go:89] found id: ""
	I0407 13:39:32.619112 1213906 logs.go:282] 0 containers: []
	W0407 13:39:32.619143 1213906 logs.go:284] No container was found matching "kindnet"
	I0407 13:39:32.619151 1213906 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0407 13:39:32.619219 1213906 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0407 13:39:32.658443 1213906 cri.go:89] found id: ""
	I0407 13:39:32.658480 1213906 logs.go:282] 0 containers: []
	W0407 13:39:32.658525 1213906 logs.go:284] No container was found matching "kubernetes-dashboard"
	I0407 13:39:32.658570 1213906 logs.go:123] Gathering logs for kubelet ...
	I0407 13:39:32.658590 1213906 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0407 13:39:32.713615 1213906 logs.go:123] Gathering logs for dmesg ...
	I0407 13:39:32.713664 1213906 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0407 13:39:32.729801 1213906 logs.go:123] Gathering logs for describe nodes ...
	I0407 13:39:32.729846 1213906 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0407 13:39:32.801508 1213906 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0407 13:39:32.801548 1213906 logs.go:123] Gathering logs for CRI-O ...
	I0407 13:39:32.801565 1213906 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0407 13:39:32.895979 1213906 logs.go:123] Gathering logs for container status ...
	I0407 13:39:32.896032 1213906 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0407 13:39:35.438760 1213906 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0407 13:39:35.451726 1213906 kubeadm.go:597] duration metric: took 4m1.971832367s to restartPrimaryControlPlane
	W0407 13:39:35.451823 1213906 out.go:270] ! Unable to restart control-plane node(s), will reset cluster: <no value>
	! Unable to restart control-plane node(s), will reset cluster: <no value>
	I0407 13:39:35.451863 1213906 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
	I0407 13:39:38.856923 1213906 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force": (3.405029234s)
	I0407 13:39:38.857045 1213906 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0407 13:39:38.872950 1213906 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0407 13:39:38.884714 1213906 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0407 13:39:38.896092 1213906 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0407 13:39:38.896121 1213906 kubeadm.go:157] found existing configuration files:
	
	I0407 13:39:38.896187 1213906 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0407 13:39:38.906638 1213906 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0407 13:39:38.906728 1213906 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0407 13:39:38.918854 1213906 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0407 13:39:38.930126 1213906 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0407 13:39:38.930211 1213906 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0407 13:39:38.941103 1213906 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0407 13:39:38.952040 1213906 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0407 13:39:38.952123 1213906 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0407 13:39:38.964330 1213906 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0407 13:39:38.975637 1213906 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0407 13:39:38.975720 1213906 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0407 13:39:38.987660 1213906 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0407 13:39:39.067789 1213906 kubeadm.go:310] [init] Using Kubernetes version: v1.20.0
	I0407 13:39:39.067905 1213906 kubeadm.go:310] [preflight] Running pre-flight checks
	I0407 13:39:39.228941 1213906 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0407 13:39:39.229130 1213906 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0407 13:39:39.229310 1213906 kubeadm.go:310] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0407 13:39:39.439348 1213906 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0407 13:39:39.441812 1213906 out.go:235]   - Generating certificates and keys ...
	I0407 13:39:39.441934 1213906 kubeadm.go:310] [certs] Using existing ca certificate authority
	I0407 13:39:39.442048 1213906 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I0407 13:39:39.442190 1213906 kubeadm.go:310] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I0407 13:39:39.442286 1213906 kubeadm.go:310] [certs] Using existing front-proxy-ca certificate authority
	I0407 13:39:39.442388 1213906 kubeadm.go:310] [certs] Using existing front-proxy-client certificate and key on disk
	I0407 13:39:39.442476 1213906 kubeadm.go:310] [certs] Using existing etcd/ca certificate authority
	I0407 13:39:39.442574 1213906 kubeadm.go:310] [certs] Using existing etcd/server certificate and key on disk
	I0407 13:39:39.442671 1213906 kubeadm.go:310] [certs] Using existing etcd/peer certificate and key on disk
	I0407 13:39:39.442783 1213906 kubeadm.go:310] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I0407 13:39:39.442887 1213906 kubeadm.go:310] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I0407 13:39:39.442961 1213906 kubeadm.go:310] [certs] Using the existing "sa" key
	I0407 13:39:39.443048 1213906 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0407 13:39:39.599587 1213906 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0407 13:39:39.804532 1213906 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0407 13:39:39.933761 1213906 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0407 13:39:40.211240 1213906 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0407 13:39:40.229341 1213906 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0407 13:39:40.231205 1213906 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0407 13:39:40.231278 1213906 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I0407 13:39:40.389203 1213906 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0407 13:39:40.391741 1213906 out.go:235]   - Booting up control plane ...
	I0407 13:39:40.391914 1213906 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0407 13:39:40.394425 1213906 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0407 13:39:40.395493 1213906 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0407 13:39:40.396308 1213906 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0407 13:39:40.408515 1213906 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I0407 13:40:20.408705 1213906 kubeadm.go:310] [kubelet-check] Initial timeout of 40s passed.
	I0407 13:40:20.408870 1213906 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0407 13:40:20.409184 1213906 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0407 13:40:25.409147 1213906 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0407 13:40:25.409413 1213906 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0407 13:40:35.409808 1213906 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0407 13:40:35.410041 1213906 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0407 13:40:55.410551 1213906 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0407 13:40:55.410847 1213906 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0407 13:41:35.412143 1213906 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0407 13:41:35.412418 1213906 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0407 13:41:35.412433 1213906 kubeadm.go:310] 
	I0407 13:41:35.412518 1213906 kubeadm.go:310] 	Unfortunately, an error has occurred:
	I0407 13:41:35.412618 1213906 kubeadm.go:310] 		timed out waiting for the condition
	I0407 13:41:35.412634 1213906 kubeadm.go:310] 
	I0407 13:41:35.412676 1213906 kubeadm.go:310] 	This error is likely caused by:
	I0407 13:41:35.412732 1213906 kubeadm.go:310] 		- The kubelet is not running
	I0407 13:41:35.412884 1213906 kubeadm.go:310] 		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	I0407 13:41:35.412896 1213906 kubeadm.go:310] 
	I0407 13:41:35.413037 1213906 kubeadm.go:310] 	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
	I0407 13:41:35.413090 1213906 kubeadm.go:310] 		- 'systemctl status kubelet'
	I0407 13:41:35.413138 1213906 kubeadm.go:310] 		- 'journalctl -xeu kubelet'
	I0407 13:41:35.413145 1213906 kubeadm.go:310] 
	I0407 13:41:35.413298 1213906 kubeadm.go:310] 	Additionally, a control plane component may have crashed or exited when started by the container runtime.
	I0407 13:41:35.413410 1213906 kubeadm.go:310] 	To troubleshoot, list all containers using your preferred container runtimes CLI.
	I0407 13:41:35.413424 1213906 kubeadm.go:310] 
	I0407 13:41:35.413591 1213906 kubeadm.go:310] 	Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
	I0407 13:41:35.413754 1213906 kubeadm.go:310] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
	I0407 13:41:35.413916 1213906 kubeadm.go:310] 		Once you have found the failing container, you can inspect its logs with:
	I0407 13:41:35.414045 1213906 kubeadm.go:310] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	I0407 13:41:35.414067 1213906 kubeadm.go:310] 
	I0407 13:41:35.414499 1213906 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0407 13:41:35.414627 1213906 kubeadm.go:310] error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	I0407 13:41:35.414734 1213906 kubeadm.go:310] To see the stack trace of this error execute with --v=5 or higher
	W0407 13:41:35.414956 1213906 out.go:270] ! initialization failed, will try again: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	! initialization failed, will try again: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	I0407 13:41:35.415025 1213906 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
	I0407 13:41:35.879085 1213906 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0407 13:41:35.895908 1213906 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0407 13:41:35.910175 1213906 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0407 13:41:35.910202 1213906 kubeadm.go:157] found existing configuration files:
	
	I0407 13:41:35.910260 1213906 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0407 13:41:35.921574 1213906 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0407 13:41:35.921654 1213906 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0407 13:41:35.931683 1213906 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0407 13:41:35.942826 1213906 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0407 13:41:35.942923 1213906 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0407 13:41:35.954258 1213906 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0407 13:41:35.966147 1213906 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0407 13:41:35.966221 1213906 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0407 13:41:35.979123 1213906 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0407 13:41:35.992598 1213906 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0407 13:41:35.992690 1213906 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0407 13:41:36.005220 1213906 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0407 13:41:36.259453 1213906 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0407 13:43:32.418179 1213906 kubeadm.go:310] error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	I0407 13:43:32.418361 1213906 kubeadm.go:310] To see the stack trace of this error execute with --v=5 or higher
	I0407 13:43:32.420514 1213906 kubeadm.go:310] [init] Using Kubernetes version: v1.20.0
	I0407 13:43:32.420604 1213906 kubeadm.go:310] [preflight] Running pre-flight checks
	I0407 13:43:32.420749 1213906 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0407 13:43:32.420904 1213906 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0407 13:43:32.421067 1213906 kubeadm.go:310] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0407 13:43:32.421182 1213906 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0407 13:43:32.424021 1213906 out.go:235]   - Generating certificates and keys ...
	I0407 13:43:32.424232 1213906 kubeadm.go:310] [certs] Using existing ca certificate authority
	I0407 13:43:32.424366 1213906 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I0407 13:43:32.424506 1213906 kubeadm.go:310] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I0407 13:43:32.424615 1213906 kubeadm.go:310] [certs] Using existing front-proxy-ca certificate authority
	I0407 13:43:32.424708 1213906 kubeadm.go:310] [certs] Using existing front-proxy-client certificate and key on disk
	I0407 13:43:32.424788 1213906 kubeadm.go:310] [certs] Using existing etcd/ca certificate authority
	I0407 13:43:32.424880 1213906 kubeadm.go:310] [certs] Using existing etcd/server certificate and key on disk
	I0407 13:43:32.424967 1213906 kubeadm.go:310] [certs] Using existing etcd/peer certificate and key on disk
	I0407 13:43:32.425083 1213906 kubeadm.go:310] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I0407 13:43:32.425204 1213906 kubeadm.go:310] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I0407 13:43:32.425270 1213906 kubeadm.go:310] [certs] Using the existing "sa" key
	I0407 13:43:32.425345 1213906 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0407 13:43:32.425413 1213906 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0407 13:43:32.425495 1213906 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0407 13:43:32.425588 1213906 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0407 13:43:32.425669 1213906 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0407 13:43:32.425860 1213906 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0407 13:43:32.425989 1213906 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0407 13:43:32.426036 1213906 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I0407 13:43:32.426137 1213906 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0407 13:43:32.428310 1213906 out.go:235]   - Booting up control plane ...
	I0407 13:43:32.428489 1213906 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0407 13:43:32.428638 1213906 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0407 13:43:32.428740 1213906 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0407 13:43:32.428861 1213906 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0407 13:43:32.429118 1213906 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I0407 13:43:32.429180 1213906 kubeadm.go:310] [kubelet-check] Initial timeout of 40s passed.
	I0407 13:43:32.429283 1213906 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0407 13:43:32.429534 1213906 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0407 13:43:32.429648 1213906 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0407 13:43:32.429960 1213906 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0407 13:43:32.430079 1213906 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0407 13:43:32.430342 1213906 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0407 13:43:32.430456 1213906 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0407 13:43:32.430733 1213906 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0407 13:43:32.430863 1213906 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0407 13:43:32.431145 1213906 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0407 13:43:32.431166 1213906 kubeadm.go:310] 
	I0407 13:43:32.431235 1213906 kubeadm.go:310] 	Unfortunately, an error has occurred:
	I0407 13:43:32.431298 1213906 kubeadm.go:310] 		timed out waiting for the condition
	I0407 13:43:32.431313 1213906 kubeadm.go:310] 
	I0407 13:43:32.431374 1213906 kubeadm.go:310] 	This error is likely caused by:
	I0407 13:43:32.431438 1213906 kubeadm.go:310] 		- The kubelet is not running
	I0407 13:43:32.431603 1213906 kubeadm.go:310] 		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	I0407 13:43:32.431627 1213906 kubeadm.go:310] 
	I0407 13:43:32.431775 1213906 kubeadm.go:310] 	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
	I0407 13:43:32.431829 1213906 kubeadm.go:310] 		- 'systemctl status kubelet'
	I0407 13:43:32.431870 1213906 kubeadm.go:310] 		- 'journalctl -xeu kubelet'
	I0407 13:43:32.431879 1213906 kubeadm.go:310] 
	I0407 13:43:32.432010 1213906 kubeadm.go:310] 	Additionally, a control plane component may have crashed or exited when started by the container runtime.
	I0407 13:43:32.432141 1213906 kubeadm.go:310] 	To troubleshoot, list all containers using your preferred container runtimes CLI.
	I0407 13:43:32.432164 1213906 kubeadm.go:310] 
	I0407 13:43:32.432338 1213906 kubeadm.go:310] 	Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
	I0407 13:43:32.432452 1213906 kubeadm.go:310] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
	I0407 13:43:32.432575 1213906 kubeadm.go:310] 		Once you have found the failing container, you can inspect its logs with:
	I0407 13:43:32.432671 1213906 kubeadm.go:310] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	I0407 13:43:32.432747 1213906 kubeadm.go:310] 
	I0407 13:43:32.432774 1213906 kubeadm.go:394] duration metric: took 7m59.00406937s to StartCluster
	I0407 13:43:32.432831 1213906 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0407 13:43:32.432921 1213906 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0407 13:43:32.472943 1213906 cri.go:89] found id: ""
	I0407 13:43:32.472977 1213906 logs.go:282] 0 containers: []
	W0407 13:43:32.472987 1213906 logs.go:284] No container was found matching "kube-apiserver"
	I0407 13:43:32.472994 1213906 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0407 13:43:32.473054 1213906 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0407 13:43:32.522049 1213906 cri.go:89] found id: ""
	I0407 13:43:32.522096 1213906 logs.go:282] 0 containers: []
	W0407 13:43:32.522111 1213906 logs.go:284] No container was found matching "etcd"
	I0407 13:43:32.522122 1213906 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0407 13:43:32.522349 1213906 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0407 13:43:32.574910 1213906 cri.go:89] found id: ""
	I0407 13:43:32.574967 1213906 logs.go:282] 0 containers: []
	W0407 13:43:32.574980 1213906 logs.go:284] No container was found matching "coredns"
	I0407 13:43:32.574990 1213906 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0407 13:43:32.575073 1213906 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0407 13:43:32.616222 1213906 cri.go:89] found id: ""
	I0407 13:43:32.616263 1213906 logs.go:282] 0 containers: []
	W0407 13:43:32.616274 1213906 logs.go:284] No container was found matching "kube-scheduler"
	I0407 13:43:32.616282 1213906 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0407 13:43:32.616363 1213906 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0407 13:43:32.667506 1213906 cri.go:89] found id: ""
	I0407 13:43:32.667552 1213906 logs.go:282] 0 containers: []
	W0407 13:43:32.667564 1213906 logs.go:284] No container was found matching "kube-proxy"
	I0407 13:43:32.667576 1213906 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0407 13:43:32.667663 1213906 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0407 13:43:32.714537 1213906 cri.go:89] found id: ""
	I0407 13:43:32.714580 1213906 logs.go:282] 0 containers: []
	W0407 13:43:32.714594 1213906 logs.go:284] No container was found matching "kube-controller-manager"
	I0407 13:43:32.714602 1213906 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0407 13:43:32.714679 1213906 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0407 13:43:32.755513 1213906 cri.go:89] found id: ""
	I0407 13:43:32.755548 1213906 logs.go:282] 0 containers: []
	W0407 13:43:32.755560 1213906 logs.go:284] No container was found matching "kindnet"
	I0407 13:43:32.755570 1213906 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0407 13:43:32.755650 1213906 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0407 13:43:32.807417 1213906 cri.go:89] found id: ""
	I0407 13:43:32.807459 1213906 logs.go:282] 0 containers: []
	W0407 13:43:32.807472 1213906 logs.go:284] No container was found matching "kubernetes-dashboard"
	I0407 13:43:32.807488 1213906 logs.go:123] Gathering logs for kubelet ...
	I0407 13:43:32.807508 1213906 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0407 13:43:32.872141 1213906 logs.go:123] Gathering logs for dmesg ...
	I0407 13:43:32.872195 1213906 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0407 13:43:32.887946 1213906 logs.go:123] Gathering logs for describe nodes ...
	I0407 13:43:32.887997 1213906 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0407 13:43:32.970468 1213906 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0407 13:43:32.970504 1213906 logs.go:123] Gathering logs for CRI-O ...
	I0407 13:43:32.970523 1213906 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0407 13:43:33.089367 1213906 logs.go:123] Gathering logs for container status ...
	I0407 13:43:33.089425 1213906 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	W0407 13:43:33.138580 1213906 out.go:418] Error starting cluster: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	W0407 13:43:33.138697 1213906 out.go:270] * 
	* 
	W0407 13:43:33.138777 1213906 out.go:270] X Error starting cluster: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	X Error starting cluster: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	W0407 13:43:33.138796 1213906 out.go:270] * 
	* 
	W0407 13:43:33.139698 1213906 out.go:293] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0407 13:43:33.143573 1213906 out.go:201] 
	W0407 13:43:33.145072 1213906 out.go:270] X Exiting due to K8S_KUBELET_NOT_RUNNING: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	X Exiting due to K8S_KUBELET_NOT_RUNNING: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	W0407 13:43:33.145155 1213906 out.go:270] * Suggestion: Check output of 'journalctl -xeu kubelet', try passing --extra-config=kubelet.cgroup-driver=systemd to minikube start
	* Suggestion: Check output of 'journalctl -xeu kubelet', try passing --extra-config=kubelet.cgroup-driver=systemd to minikube start
	W0407 13:43:33.145183 1213906 out.go:270] * Related issue: https://github.com/kubernetes/minikube/issues/4172
	* Related issue: https://github.com/kubernetes/minikube/issues/4172
	I0407 13:43:33.147286 1213906 out.go:201] 

                                                
                                                
** /stderr **
start_stop_delete_test.go:257: failed to start minikube post-stop. args "out/minikube-linux-amd64 start -p old-k8s-version-435730 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.20.0": exit status 109
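For reference, a minimal sketch of the recovery steps the kubeadm and minikube output above suggests, not part of the captured log. The profile name old-k8s-version-435730 is taken from the failing command above and CONTAINERID is a placeholder; the first four commands run inside the node (e.g. via out/minikube-linux-amd64 -p old-k8s-version-435730 ssh), the last one on the host:

	# kubelet health and recent logs, as suggested by the kubeadm output above
	sudo systemctl status kubelet
	sudo journalctl -xeu kubelet
	# list Kubernetes containers known to CRI-O and inspect a failing one
	sudo crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause
	sudo crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID
	# retry the start with the cgroup-driver hint from the minikube suggestion above
	out/minikube-linux-amd64 start -p old-k8s-version-435730 --extra-config=kubelet.cgroup-driver=systemd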
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-435730 -n old-k8s-version-435730
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-435730 -n old-k8s-version-435730: exit status 2 (280.355166ms)

                                                
                                                
-- stdout --
	Running

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 2 (may be ok)
helpers_test.go:244: <<< TestStartStop/group/old-k8s-version/serial/SecondStart FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestStartStop/group/old-k8s-version/serial/SecondStart]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p old-k8s-version-435730 logs -n 25
helpers_test.go:252: TestStartStop/group/old-k8s-version/serial/SecondStart logs: 
-- stdout --
	
	==> Audit <==
	|---------|--------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
	| Command |                          Args                          |           Profile            |  User   | Version |     Start Time      |      End Time       |
	|---------|--------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
	| addons  | enable metrics-server -p embed-certs-931633            | embed-certs-931633           | jenkins | v1.35.0 | 07 Apr 25 13:35 UTC | 07 Apr 25 13:35 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| stop    | -p embed-certs-931633                                  | embed-certs-931633           | jenkins | v1.35.0 | 07 Apr 25 13:35 UTC | 07 Apr 25 13:36 UTC |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| start   | -p cert-expiration-950320                              | cert-expiration-950320       | jenkins | v1.35.0 | 07 Apr 25 13:36 UTC | 07 Apr 25 13:36 UTC |
	|         | --memory=2048                                          |                              |         |         |                     |                     |
	|         | --cert-expiration=8760h                                |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	| addons  | enable dashboard -p no-preload-028452                  | no-preload-028452            | jenkins | v1.35.0 | 07 Apr 25 13:36 UTC | 07 Apr 25 13:36 UTC |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p no-preload-028452                                   | no-preload-028452            | jenkins | v1.35.0 | 07 Apr 25 13:36 UTC | 07 Apr 25 13:43 UTC |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --preload=false --driver=kvm2                          |                              |         |         |                     |                     |
	|         |  --container-runtime=crio                              |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.32.2                           |                              |         |         |                     |                     |
	| addons  | enable dashboard -p embed-certs-931633                 | embed-certs-931633           | jenkins | v1.35.0 | 07 Apr 25 13:36 UTC | 07 Apr 25 13:36 UTC |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p embed-certs-931633                                  | embed-certs-931633           | jenkins | v1.35.0 | 07 Apr 25 13:36 UTC | 07 Apr 25 13:42 UTC |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --embed-certs --driver=kvm2                            |                              |         |         |                     |                     |
	|         |  --container-runtime=crio                              |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.32.2                           |                              |         |         |                     |                     |
	| delete  | -p cert-expiration-950320                              | cert-expiration-950320       | jenkins | v1.35.0 | 07 Apr 25 13:36 UTC | 07 Apr 25 13:36 UTC |
	| delete  | -p                                                     | disable-driver-mounts-696615 | jenkins | v1.35.0 | 07 Apr 25 13:36 UTC | 07 Apr 25 13:36 UTC |
	|         | disable-driver-mounts-696615                           |                              |         |         |                     |                     |
	| start   | -p pause-111763 --memory=2048                          | pause-111763                 | jenkins | v1.35.0 | 07 Apr 25 13:36 UTC | 07 Apr 25 13:38 UTC |
	|         | --install-addons=false                                 |                              |         |         |                     |                     |
	|         | --wait=all --driver=kvm2                               |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	| start   | -p pause-111763                                        | pause-111763                 | jenkins | v1.35.0 | 07 Apr 25 13:38 UTC | 07 Apr 25 13:39 UTC |
	|         | --alsologtostderr                                      |                              |         |         |                     |                     |
	|         | -v=1 --driver=kvm2                                     |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	| delete  | -p pause-111763                                        | pause-111763                 | jenkins | v1.35.0 | 07 Apr 25 13:39 UTC | 07 Apr 25 13:39 UTC |
	| start   | -p kubernetes-upgrade-973925                           | kubernetes-upgrade-973925    | jenkins | v1.35.0 | 07 Apr 25 13:39 UTC |                     |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.20.0                           |                              |         |         |                     |                     |
	|         | --alsologtostderr                                      |                              |         |         |                     |                     |
	|         | -v=1 --driver=kvm2                                     |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	| image   | embed-certs-931633 image list                          | embed-certs-931633           | jenkins | v1.35.0 | 07 Apr 25 13:42 UTC | 07 Apr 25 13:42 UTC |
	|         | --format=json                                          |                              |         |         |                     |                     |
	| pause   | -p embed-certs-931633                                  | embed-certs-931633           | jenkins | v1.35.0 | 07 Apr 25 13:42 UTC | 07 Apr 25 13:42 UTC |
	|         | --alsologtostderr -v=1                                 |                              |         |         |                     |                     |
	| unpause | -p embed-certs-931633                                  | embed-certs-931633           | jenkins | v1.35.0 | 07 Apr 25 13:42 UTC | 07 Apr 25 13:42 UTC |
	|         | --alsologtostderr -v=1                                 |                              |         |         |                     |                     |
	| delete  | -p embed-certs-931633                                  | embed-certs-931633           | jenkins | v1.35.0 | 07 Apr 25 13:42 UTC | 07 Apr 25 13:42 UTC |
	| delete  | -p embed-certs-931633                                  | embed-certs-931633           | jenkins | v1.35.0 | 07 Apr 25 13:42 UTC | 07 Apr 25 13:42 UTC |
	| start   | -p stopped-upgrade-392390                              | minikube                     | jenkins | v1.26.0 | 07 Apr 25 13:42 UTC |                     |
	|         | --memory=2200 --vm-driver=kvm2                         |                              |         |         |                     |                     |
	|         |  --container-runtime=crio                              |                              |         |         |                     |                     |
	| image   | no-preload-028452 image list                           | no-preload-028452            | jenkins | v1.35.0 | 07 Apr 25 13:43 UTC | 07 Apr 25 13:43 UTC |
	|         | --format=json                                          |                              |         |         |                     |                     |
	| pause   | -p no-preload-028452                                   | no-preload-028452            | jenkins | v1.35.0 | 07 Apr 25 13:43 UTC | 07 Apr 25 13:43 UTC |
	|         | --alsologtostderr -v=1                                 |                              |         |         |                     |                     |
	| unpause | -p no-preload-028452                                   | no-preload-028452            | jenkins | v1.35.0 | 07 Apr 25 13:43 UTC | 07 Apr 25 13:43 UTC |
	|         | --alsologtostderr -v=1                                 |                              |         |         |                     |                     |
	| delete  | -p no-preload-028452                                   | no-preload-028452            | jenkins | v1.35.0 | 07 Apr 25 13:43 UTC | 07 Apr 25 13:43 UTC |
	| delete  | -p no-preload-028452                                   | no-preload-028452            | jenkins | v1.35.0 | 07 Apr 25 13:43 UTC | 07 Apr 25 13:43 UTC |
	| start   | -p                                                     | default-k8s-diff-port-405061 | jenkins | v1.35.0 | 07 Apr 25 13:43 UTC |                     |
	|         | default-k8s-diff-port-405061                           |                              |         |         |                     |                     |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --apiserver-port=8444                                  |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.32.2                           |                              |         |         |                     |                     |
	|---------|--------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2025/04/07 13:43:20
	Running on machine: ubuntu-20-agent-2
	Binary: Built with gc go1.24.0 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0407 13:43:20.721227 1218480 out.go:345] Setting OutFile to fd 1 ...
	I0407 13:43:20.721510 1218480 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0407 13:43:20.721520 1218480 out.go:358] Setting ErrFile to fd 2...
	I0407 13:43:20.721524 1218480 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0407 13:43:20.721811 1218480 root.go:338] Updating PATH: /home/jenkins/minikube-integration/20602-1162386/.minikube/bin
	I0407 13:43:20.722518 1218480 out.go:352] Setting JSON to false
	I0407 13:43:20.723732 1218480 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-2","uptime":19545,"bootTime":1744013856,"procs":224,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1078-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0407 13:43:20.723803 1218480 start.go:139] virtualization: kvm guest
	I0407 13:43:20.726600 1218480 out.go:177] * [default-k8s-diff-port-405061] minikube v1.35.0 on Ubuntu 20.04 (kvm/amd64)
	I0407 13:43:20.728330 1218480 notify.go:220] Checking for updates...
	I0407 13:43:20.728393 1218480 out.go:177]   - MINIKUBE_LOCATION=20602
	I0407 13:43:20.730483 1218480 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0407 13:43:20.732261 1218480 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/20602-1162386/kubeconfig
	I0407 13:43:20.734048 1218480 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/20602-1162386/.minikube
	I0407 13:43:20.735886 1218480 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0407 13:43:20.737388 1218480 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0407 13:43:20.739713 1218480 config.go:182] Loaded profile config "kubernetes-upgrade-973925": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.20.0
	I0407 13:43:20.739849 1218480 config.go:182] Loaded profile config "old-k8s-version-435730": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.20.0
	I0407 13:43:20.739939 1218480 config.go:182] Loaded profile config "stopped-upgrade-392390": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.24.1
	I0407 13:43:20.740084 1218480 driver.go:394] Setting default libvirt URI to qemu:///system
	I0407 13:43:20.782499 1218480 out.go:177] * Using the kvm2 driver based on user configuration
	I0407 13:43:20.784081 1218480 start.go:297] selected driver: kvm2
	I0407 13:43:20.784108 1218480 start.go:901] validating driver "kvm2" against <nil>
	I0407 13:43:20.784125 1218480 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0407 13:43:20.785054 1218480 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0407 13:43:20.785182 1218480 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/20602-1162386/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0407 13:43:20.805165 1218480 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.35.0
	I0407 13:43:20.805256 1218480 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0407 13:43:20.805540 1218480 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0407 13:43:20.805588 1218480 cni.go:84] Creating CNI manager for ""
	I0407 13:43:20.805637 1218480 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0407 13:43:20.805648 1218480 start_flags.go:319] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I0407 13:43:20.805730 1218480 start.go:340] cluster config:
	{Name:default-k8s-diff-port-405061 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.46-1743675393-20591@sha256:8de8167e280414c9d16e4c3da59bc85bc7c9cc24228af995863fc7bcabfcf727 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.32.2 ClusterName:default-k8s-diff-port-405061 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8444 KubernetesVersion:v1.32.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0407 13:43:20.805874 1218480 iso.go:125] acquiring lock: {Name:mk51e1827709f7a3810dbd898083f8185ece65eb Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0407 13:43:20.808204 1218480 out.go:177] * Starting "default-k8s-diff-port-405061" primary control-plane node in "default-k8s-diff-port-405061" cluster
	I0407 13:43:20.810091 1218480 preload.go:131] Checking if preload exists for k8s version v1.32.2 and runtime crio
	I0407 13:43:20.810160 1218480 preload.go:146] Found local preload: /home/jenkins/minikube-integration/20602-1162386/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.32.2-cri-o-overlay-amd64.tar.lz4
	I0407 13:43:20.810172 1218480 cache.go:56] Caching tarball of preloaded images
	I0407 13:43:20.810372 1218480 preload.go:172] Found /home/jenkins/minikube-integration/20602-1162386/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.32.2-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I0407 13:43:20.810414 1218480 cache.go:59] Finished verifying existence of preloaded tar for v1.32.2 on crio
	I0407 13:43:20.810559 1218480 profile.go:143] Saving config to /home/jenkins/minikube-integration/20602-1162386/.minikube/profiles/default-k8s-diff-port-405061/config.json ...
	I0407 13:43:20.810595 1218480 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20602-1162386/.minikube/profiles/default-k8s-diff-port-405061/config.json: {Name:mk985d9aec97e74e7aefc3008306730ca890d75f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0407 13:43:20.810855 1218480 start.go:360] acquireMachinesLock for default-k8s-diff-port-405061: {Name:mk51d4c744fa92d56cf6ac11b1e792c85ef6709a Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0407 13:43:20.810921 1218480 start.go:364] duration metric: took 38.014µs to acquireMachinesLock for "default-k8s-diff-port-405061"
	I0407 13:43:20.810955 1218480 start.go:93] Provisioning new machine with config: &{Name:default-k8s-diff-port-405061 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.35.0-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.46-1743675393-20591@sha256:8de8167e280414c9d16e4c3da59bc85bc7c9cc24228af995863fc7bcabfcf727 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.32.2 ClusterName:default-k8s-diff-port-405061 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8444 KubernetesVersion:v1.32.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8444 KubernetesVersion:v1.32.2 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0407 13:43:20.811066 1218480 start.go:125] createHost starting for "" (driver="kvm2")
	I0407 13:43:20.684277 1217742 ssh_runner.go:235] Completed: sudo tar -I lz4 -C /var -xf /preloaded.tar.lz4: (2.965190415s)
	I0407 13:43:20.684303 1217742 crio.go:449] Took 2.965317 seconds to extract the tarball
	I0407 13:43:20.684316 1217742 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I0407 13:43:20.729465 1217742 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0407 13:43:20.743806 1217742 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0407 13:43:20.756694 1217742 docker.go:179] disabling docker service ...
	I0407 13:43:20.756751 1217742 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0407 13:43:20.769450 1217742 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0407 13:43:20.781230 1217742 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0407 13:43:20.901200 1217742 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0407 13:43:21.039860 1217742 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0407 13:43:21.053097 1217742 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	image-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0407 13:43:21.074303 1217742 ssh_runner.go:195] Run: /bin/bash -c "sudo sed -e 's|^.*pause_image = .*$|pause_image = "k8s.gcr.io/pause:3.7"|' -i /etc/crio/crio.conf.d/02-crio.conf"
	I0407 13:43:21.084965 1217742 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0407 13:43:21.094074 1217742 crio.go:137] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0407 13:43:21.094128 1217742 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0407 13:43:21.107970 1217742 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0407 13:43:21.117199 1217742 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0407 13:43:21.230167 1217742 ssh_runner.go:195] Run: sudo systemctl start crio
	I0407 13:43:21.285958 1217742 start.go:447] Will wait 60s for socket path /var/run/crio/crio.sock
	I0407 13:43:21.286033 1217742 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0407 13:43:21.290918 1217742 start.go:468] Will wait 60s for crictl version
	I0407 13:43:21.290984 1217742 ssh_runner.go:195] Run: sudo crictl version
	I0407 13:43:21.321684 1217742 start.go:477] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.22.3
	RuntimeApiVersion:  v1alpha2
	I0407 13:43:21.321801 1217742 ssh_runner.go:195] Run: crio --version
	I0407 13:43:21.359681 1217742 ssh_runner.go:195] Run: crio --version
	I0407 13:43:21.400961 1217742 out.go:177] * Preparing Kubernetes v1.24.1 on CRI-O 1.22.3 ...
	I0407 13:43:21.402978 1217742 main.go:134] libmachine: (stopped-upgrade-392390) Calling .GetIP
	I0407 13:43:21.406716 1217742 main.go:134] libmachine: (stopped-upgrade-392390) DBG | domain stopped-upgrade-392390 has defined MAC address 52:54:00:6c:e5:9b in network mk-stopped-upgrade-392390
	I0407 13:43:21.407226 1217742 main.go:134] libmachine: (stopped-upgrade-392390) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:6c:e5:9b", ip: ""} in network mk-stopped-upgrade-392390: {Iface:virbr3 ExpiryTime:2025-04-07 14:43:03 +0000 UTC Type:0 Mac:52:54:00:6c:e5:9b Iaid: IPaddr:192.168.61.238 Prefix:24 Hostname:stopped-upgrade-392390 Clientid:01:52:54:00:6c:e5:9b}
	I0407 13:43:21.407256 1217742 main.go:134] libmachine: (stopped-upgrade-392390) DBG | domain stopped-upgrade-392390 has defined IP address 192.168.61.238 and MAC address 52:54:00:6c:e5:9b in network mk-stopped-upgrade-392390
	I0407 13:43:21.407563 1217742 ssh_runner.go:195] Run: grep 192.168.61.1	host.minikube.internal$ /etc/hosts
	I0407 13:43:21.411920 1217742 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.61.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0407 13:43:21.425481 1217742 preload.go:132] Checking if preload exists for k8s version v1.24.1 and runtime crio
	I0407 13:43:21.425530 1217742 ssh_runner.go:195] Run: sudo crictl images --output json
	I0407 13:43:21.473814 1217742 crio.go:494] all images are preloaded for cri-o runtime.
	I0407 13:43:21.473828 1217742 crio.go:413] Images already preloaded, skipping extraction
	I0407 13:43:21.473899 1217742 ssh_runner.go:195] Run: sudo crictl images --output json
	I0407 13:43:21.501096 1217742 crio.go:494] all images are preloaded for cri-o runtime.
	I0407 13:43:21.501112 1217742 cache_images.go:84] Images are preloaded, skipping loading
	I0407 13:43:21.501178 1217742 ssh_runner.go:195] Run: crio config
	I0407 13:43:21.546238 1217742 cni.go:95] Creating CNI manager for ""
	I0407 13:43:21.546254 1217742 cni.go:165] "kvm2" driver + crio runtime found, recommending bridge
	I0407 13:43:21.546268 1217742 kubeadm.go:87] Using pod CIDR: 10.244.0.0/16
	I0407 13:43:21.546305 1217742 kubeadm.go:158] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.61.238 APIServerPort:8443 KubernetesVersion:v1.24.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:stopped-upgrade-392390 NodeName:stopped-upgrade-392390 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.61.238"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NoTaintMaster:true NodeIP:192.168.61.238 CgroupDriver:systemd ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[]}
	I0407 13:43:21.546488 1217742 kubeadm.go:162] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.61.238
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: /var/run/crio/crio.sock
	  name: "stopped-upgrade-392390"
	  kubeletExtraArgs:
	    node-ip: 192.168.61.238
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.61.238"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.24.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: systemd
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0407 13:43:21.546627 1217742 kubeadm.go:961] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.24.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime=remote --container-runtime-endpoint=/var/run/crio/crio.sock --hostname-override=stopped-upgrade-392390 --image-service-endpoint=/var/run/crio/crio.sock --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.61.238 --runtime-request-timeout=15m
	
	[Install]
	 config:
	{KubernetesVersion:v1.24.1 ClusterName:stopped-upgrade-392390 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:}
	I0407 13:43:21.546700 1217742 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.24.1
	I0407 13:43:21.556279 1217742 binaries.go:44] Found k8s binaries, skipping transfer
	I0407 13:43:21.556337 1217742 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0407 13:43:21.564761 1217742 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (481 bytes)
	I0407 13:43:21.580011 1217742 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0407 13:43:21.596060 1217742 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2048 bytes)
	I0407 13:43:21.613142 1217742 ssh_runner.go:195] Run: grep 192.168.61.238	control-plane.minikube.internal$ /etc/hosts
	I0407 13:43:21.617292 1217742 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.61.238	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0407 13:43:21.631137 1217742 certs.go:54] Setting up /home/jenkins/minikube-integration/20602-1162386/.minikube/profiles/stopped-upgrade-392390 for IP: 192.168.61.238
	I0407 13:43:21.631318 1217742 certs.go:182] skipping minikubeCA CA generation: /home/jenkins/minikube-integration/20602-1162386/.minikube/ca.key
	I0407 13:43:21.631365 1217742 certs.go:182] skipping proxyClientCA CA generation: /home/jenkins/minikube-integration/20602-1162386/.minikube/proxy-client-ca.key
	I0407 13:43:21.631482 1217742 certs.go:302] generating minikube-user signed cert: /home/jenkins/minikube-integration/20602-1162386/.minikube/profiles/stopped-upgrade-392390/client.key
	I0407 13:43:21.631498 1217742 crypto.go:68] Generating cert /home/jenkins/minikube-integration/20602-1162386/.minikube/profiles/stopped-upgrade-392390/client.crt with IP's: []
	I0407 13:43:21.890262 1217742 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/20602-1162386/.minikube/profiles/stopped-upgrade-392390/client.crt ...
	I0407 13:43:21.890280 1217742 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20602-1162386/.minikube/profiles/stopped-upgrade-392390/client.crt: {Name:mkd4718a2d1ca83cc3161a47da037a1fc6c26cc4 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0407 13:43:21.890501 1217742 crypto.go:164] Writing key to /home/jenkins/minikube-integration/20602-1162386/.minikube/profiles/stopped-upgrade-392390/client.key ...
	I0407 13:43:21.890508 1217742 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20602-1162386/.minikube/profiles/stopped-upgrade-392390/client.key: {Name:mk734abbcfb4e2404af0a27ce9b55858ecb37b40 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0407 13:43:21.890604 1217742 certs.go:302] generating minikube signed cert: /home/jenkins/minikube-integration/20602-1162386/.minikube/profiles/stopped-upgrade-392390/apiserver.key.761b3e7f
	I0407 13:43:21.890615 1217742 crypto.go:68] Generating cert /home/jenkins/minikube-integration/20602-1162386/.minikube/profiles/stopped-upgrade-392390/apiserver.crt.761b3e7f with IP's: [192.168.61.238 10.96.0.1 127.0.0.1 10.0.0.1]
	I0407 13:43:22.166865 1217742 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/20602-1162386/.minikube/profiles/stopped-upgrade-392390/apiserver.crt.761b3e7f ...
	I0407 13:43:22.166887 1217742 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20602-1162386/.minikube/profiles/stopped-upgrade-392390/apiserver.crt.761b3e7f: {Name:mkaf5b9e3883829e4f360d2799d2648cfc6aa2b5 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0407 13:43:22.167128 1217742 crypto.go:164] Writing key to /home/jenkins/minikube-integration/20602-1162386/.minikube/profiles/stopped-upgrade-392390/apiserver.key.761b3e7f ...
	I0407 13:43:22.167138 1217742 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20602-1162386/.minikube/profiles/stopped-upgrade-392390/apiserver.key.761b3e7f: {Name:mk5ba3f02d0c7ff2d3ec1ab7bde3b35907fae1d7 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0407 13:43:22.167237 1217742 certs.go:320] copying /home/jenkins/minikube-integration/20602-1162386/.minikube/profiles/stopped-upgrade-392390/apiserver.crt.761b3e7f -> /home/jenkins/minikube-integration/20602-1162386/.minikube/profiles/stopped-upgrade-392390/apiserver.crt
	I0407 13:43:22.167292 1217742 certs.go:324] copying /home/jenkins/minikube-integration/20602-1162386/.minikube/profiles/stopped-upgrade-392390/apiserver.key.761b3e7f -> /home/jenkins/minikube-integration/20602-1162386/.minikube/profiles/stopped-upgrade-392390/apiserver.key
	I0407 13:43:22.167332 1217742 certs.go:302] generating aggregator signed cert: /home/jenkins/minikube-integration/20602-1162386/.minikube/profiles/stopped-upgrade-392390/proxy-client.key
	I0407 13:43:22.167341 1217742 crypto.go:68] Generating cert /home/jenkins/minikube-integration/20602-1162386/.minikube/profiles/stopped-upgrade-392390/proxy-client.crt with IP's: []
	I0407 13:43:22.307456 1217742 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/20602-1162386/.minikube/profiles/stopped-upgrade-392390/proxy-client.crt ...
	I0407 13:43:22.307473 1217742 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20602-1162386/.minikube/profiles/stopped-upgrade-392390/proxy-client.crt: {Name:mkf61f6fc2ff013e5da99f7e2a67c29d9cd12afa Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0407 13:43:22.307697 1217742 crypto.go:164] Writing key to /home/jenkins/minikube-integration/20602-1162386/.minikube/profiles/stopped-upgrade-392390/proxy-client.key ...
	I0407 13:43:22.307704 1217742 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20602-1162386/.minikube/profiles/stopped-upgrade-392390/proxy-client.key: {Name:mk2847ea52f207657001b2ca493e8eb24763015b Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0407 13:43:22.307874 1217742 certs.go:388] found cert: /home/jenkins/minikube-integration/20602-1162386/.minikube/certs/home/jenkins/minikube-integration/20602-1162386/.minikube/certs/1169716.pem (1338 bytes)
	W0407 13:43:22.307905 1217742 certs.go:384] ignoring /home/jenkins/minikube-integration/20602-1162386/.minikube/certs/home/jenkins/minikube-integration/20602-1162386/.minikube/certs/1169716_empty.pem, impossibly tiny 0 bytes
	I0407 13:43:22.307914 1217742 certs.go:388] found cert: /home/jenkins/minikube-integration/20602-1162386/.minikube/certs/home/jenkins/minikube-integration/20602-1162386/.minikube/certs/ca-key.pem (1675 bytes)
	I0407 13:43:22.307936 1217742 certs.go:388] found cert: /home/jenkins/minikube-integration/20602-1162386/.minikube/certs/home/jenkins/minikube-integration/20602-1162386/.minikube/certs/ca.pem (1078 bytes)
	I0407 13:43:22.307953 1217742 certs.go:388] found cert: /home/jenkins/minikube-integration/20602-1162386/.minikube/certs/home/jenkins/minikube-integration/20602-1162386/.minikube/certs/cert.pem (1123 bytes)
	I0407 13:43:22.307971 1217742 certs.go:388] found cert: /home/jenkins/minikube-integration/20602-1162386/.minikube/certs/home/jenkins/minikube-integration/20602-1162386/.minikube/certs/key.pem (1675 bytes)
	I0407 13:43:22.308002 1217742 certs.go:388] found cert: /home/jenkins/minikube-integration/20602-1162386/.minikube/files/etc/ssl/certs/home/jenkins/minikube-integration/20602-1162386/.minikube/files/etc/ssl/certs/11697162.pem (1708 bytes)
	I0407 13:43:22.308618 1217742 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20602-1162386/.minikube/profiles/stopped-upgrade-392390/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1399 bytes)
	I0407 13:43:22.334332 1217742 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20602-1162386/.minikube/profiles/stopped-upgrade-392390/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0407 13:43:22.360860 1217742 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20602-1162386/.minikube/profiles/stopped-upgrade-392390/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0407 13:43:22.388563 1217742 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20602-1162386/.minikube/profiles/stopped-upgrade-392390/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0407 13:43:22.416919 1217742 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20602-1162386/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0407 13:43:22.440064 1217742 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20602-1162386/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0407 13:43:22.462424 1217742 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20602-1162386/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0407 13:43:22.486854 1217742 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20602-1162386/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0407 13:43:22.511176 1217742 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20602-1162386/.minikube/certs/1169716.pem --> /usr/share/ca-certificates/1169716.pem (1338 bytes)
	I0407 13:43:22.534806 1217742 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20602-1162386/.minikube/files/etc/ssl/certs/11697162.pem --> /usr/share/ca-certificates/11697162.pem (1708 bytes)
	I0407 13:43:22.564301 1217742 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20602-1162386/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0407 13:43:22.586953 1217742 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0407 13:43:22.603120 1217742 ssh_runner.go:195] Run: openssl version
	I0407 13:43:22.609465 1217742 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0407 13:43:22.621794 1217742 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0407 13:43:22.626881 1217742 certs.go:431] hashing: -rw-r--r-- 1 root root 1111 Apr  7 12:14 /usr/share/ca-certificates/minikubeCA.pem
	I0407 13:43:22.626939 1217742 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0407 13:43:22.633179 1217742 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0407 13:43:22.644741 1217742 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/1169716.pem && ln -fs /usr/share/ca-certificates/1169716.pem /etc/ssl/certs/1169716.pem"
	I0407 13:43:22.655411 1217742 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/1169716.pem
	I0407 13:43:22.660100 1217742 certs.go:431] hashing: -rw-r--r-- 1 root root 1338 Apr  7 12:22 /usr/share/ca-certificates/1169716.pem
	I0407 13:43:22.660165 1217742 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/1169716.pem
	I0407 13:43:22.666078 1217742 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/1169716.pem /etc/ssl/certs/51391683.0"
	I0407 13:43:22.675884 1217742 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/11697162.pem && ln -fs /usr/share/ca-certificates/11697162.pem /etc/ssl/certs/11697162.pem"
	I0407 13:43:22.686351 1217742 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/11697162.pem
	I0407 13:43:22.692533 1217742 certs.go:431] hashing: -rw-r--r-- 1 root root 1708 Apr  7 12:22 /usr/share/ca-certificates/11697162.pem
	I0407 13:43:22.692609 1217742 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/11697162.pem
	I0407 13:43:22.699254 1217742 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/11697162.pem /etc/ssl/certs/3ec20f2e.0"
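The block above is the CA material being wired into OpenSSL's hashed-lookup layout: each PEM is hashed with "openssl x509 -hash -noout" and then symlinked as "<hash>.0" under /etc/ssl/certs. A minimal Go sketch of that convention follows; the helper name and paths are illustrative only, not minikube's actual certs.go.

package main

import (
	"fmt"
	"os"
	"os/exec"
	"path/filepath"
	"strings"
)

// linkCertByHash mirrors the convention in the log: compute the OpenSSL
// subject hash of a PEM certificate, then symlink the PEM to "<hash>.0"
// in the system cert directory so OpenSSL lookups can find it.
func linkCertByHash(pemPath, certsDir string) error {
	out, err := exec.Command("openssl", "x509", "-hash", "-noout", "-in", pemPath).Output()
	if err != nil {
		return fmt.Errorf("hashing %s: %w", pemPath, err)
	}
	hash := strings.TrimSpace(string(out))
	link := filepath.Join(certsDir, hash+".0")
	_ = os.Remove(link) // behave like "ln -fs": replace any stale link
	return os.Symlink(pemPath, link)
}

func main() {
	// Example paths taken from the log; adjust for a real system.
	if err := linkCertByHash("/usr/share/ca-certificates/minikubeCA.pem", "/etc/ssl/certs"); err != nil {
		fmt.Fprintln(os.Stderr, err)
	}
}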
	I0407 13:43:22.710912 1217742 kubeadm.go:395] StartCluster: {Name:stopped-upgrade-392390 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.26.0-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.32@sha256:9190bd2393eae887316c97a74370b7d5dad8f0b2ef91ac2662bc36f7ef8e0b95 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.24.1 ClusterName:stopped-upgrade-392390 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.61.238 Port:8443 KubernetesVersion:v1.24.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath:}
	I0407 13:43:22.711047 1217742 cri.go:52] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0407 13:43:22.711118 1217742 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0407 13:43:22.752475 1217742 cri.go:87] found id: ""
	I0407 13:43:22.752536 1217742 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0407 13:43:22.760794 1217742 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0407 13:43:22.771902 1217742 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0407 13:43:22.785205 1217742 kubeadm.go:152] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0407 13:43:22.785264 1217742 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,Mem"
	I0407 13:43:23.511857 1217742 out.go:204]   - Generating certificates and keys ...
	I0407 13:43:20.813296 1218480 out.go:235] * Creating kvm2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I0407 13:43:20.813546 1218480 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0407 13:43:20.813624 1218480 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0407 13:43:20.830734 1218480 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41539
	I0407 13:43:20.831310 1218480 main.go:141] libmachine: () Calling .GetVersion
	I0407 13:43:20.831891 1218480 main.go:141] libmachine: Using API Version  1
	I0407 13:43:20.831915 1218480 main.go:141] libmachine: () Calling .SetConfigRaw
	I0407 13:43:20.832309 1218480 main.go:141] libmachine: () Calling .GetMachineName
	I0407 13:43:20.832588 1218480 main.go:141] libmachine: (default-k8s-diff-port-405061) Calling .GetMachineName
	I0407 13:43:20.832914 1218480 main.go:141] libmachine: (default-k8s-diff-port-405061) Calling .DriverName
	I0407 13:43:20.833100 1218480 start.go:159] libmachine.API.Create for "default-k8s-diff-port-405061" (driver="kvm2")
	I0407 13:43:20.833143 1218480 client.go:168] LocalClient.Create starting
	I0407 13:43:20.833180 1218480 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/20602-1162386/.minikube/certs/ca.pem
	I0407 13:43:20.833215 1218480 main.go:141] libmachine: Decoding PEM data...
	I0407 13:43:20.833244 1218480 main.go:141] libmachine: Parsing certificate...
	I0407 13:43:20.833326 1218480 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/20602-1162386/.minikube/certs/cert.pem
	I0407 13:43:20.833358 1218480 main.go:141] libmachine: Decoding PEM data...
	I0407 13:43:20.833375 1218480 main.go:141] libmachine: Parsing certificate...
	I0407 13:43:20.833399 1218480 main.go:141] libmachine: Running pre-create checks...
	I0407 13:43:20.833413 1218480 main.go:141] libmachine: (default-k8s-diff-port-405061) Calling .PreCreateCheck
	I0407 13:43:20.833852 1218480 main.go:141] libmachine: (default-k8s-diff-port-405061) Calling .GetConfigRaw
	I0407 13:43:20.834333 1218480 main.go:141] libmachine: Creating machine...
	I0407 13:43:20.834356 1218480 main.go:141] libmachine: (default-k8s-diff-port-405061) Calling .Create
	I0407 13:43:20.834556 1218480 main.go:141] libmachine: (default-k8s-diff-port-405061) creating KVM machine...
	I0407 13:43:20.834572 1218480 main.go:141] libmachine: (default-k8s-diff-port-405061) creating network...
	I0407 13:43:20.836118 1218480 main.go:141] libmachine: (default-k8s-diff-port-405061) DBG | found existing default KVM network
	I0407 13:43:20.837186 1218480 main.go:141] libmachine: (default-k8s-diff-port-405061) DBG | I0407 13:43:20.836962 1218503 network.go:211] skipping subnet 192.168.39.0/24 that is taken: &{IP:192.168.39.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.39.0/24 Gateway:192.168.39.1 ClientMin:192.168.39.2 ClientMax:192.168.39.254 Broadcast:192.168.39.255 IsPrivate:true Interface:{IfaceName:virbr1 IfaceIPv4:192.168.39.1 IfaceMTU:1500 IfaceMAC:52:54:00:87:f6:77} reservation:<nil>}
	I0407 13:43:20.838363 1218480 main.go:141] libmachine: (default-k8s-diff-port-405061) DBG | I0407 13:43:20.838240 1218503 network.go:211] skipping subnet 192.168.50.0/24 that is taken: &{IP:192.168.50.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.50.0/24 Gateway:192.168.50.1 ClientMin:192.168.50.2 ClientMax:192.168.50.254 Broadcast:192.168.50.255 IsPrivate:true Interface:{IfaceName:virbr2 IfaceIPv4:192.168.50.1 IfaceMTU:1500 IfaceMAC:52:54:00:1b:3d:39} reservation:<nil>}
	I0407 13:43:20.839318 1218480 main.go:141] libmachine: (default-k8s-diff-port-405061) DBG | I0407 13:43:20.839207 1218503 network.go:211] skipping subnet 192.168.61.0/24 that is taken: &{IP:192.168.61.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.61.0/24 Gateway:192.168.61.1 ClientMin:192.168.61.2 ClientMax:192.168.61.254 Broadcast:192.168.61.255 IsPrivate:true Interface:{IfaceName:virbr3 IfaceIPv4:192.168.61.1 IfaceMTU:1500 IfaceMAC:52:54:00:8b:c7:85} reservation:<nil>}
	I0407 13:43:20.840443 1218480 main.go:141] libmachine: (default-k8s-diff-port-405061) DBG | I0407 13:43:20.840345 1218503 network.go:206] using free private subnet 192.168.72.0/24: &{IP:192.168.72.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.72.0/24 Gateway:192.168.72.1 ClientMin:192.168.72.2 ClientMax:192.168.72.254 Broadcast:192.168.72.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc0002d1260}
	I0407 13:43:20.840501 1218480 main.go:141] libmachine: (default-k8s-diff-port-405061) DBG | created network xml: 
	I0407 13:43:20.840528 1218480 main.go:141] libmachine: (default-k8s-diff-port-405061) DBG | <network>
	I0407 13:43:20.840553 1218480 main.go:141] libmachine: (default-k8s-diff-port-405061) DBG |   <name>mk-default-k8s-diff-port-405061</name>
	I0407 13:43:20.840572 1218480 main.go:141] libmachine: (default-k8s-diff-port-405061) DBG |   <dns enable='no'/>
	I0407 13:43:20.840585 1218480 main.go:141] libmachine: (default-k8s-diff-port-405061) DBG |   
	I0407 13:43:20.840599 1218480 main.go:141] libmachine: (default-k8s-diff-port-405061) DBG |   <ip address='192.168.72.1' netmask='255.255.255.0'>
	I0407 13:43:20.840610 1218480 main.go:141] libmachine: (default-k8s-diff-port-405061) DBG |     <dhcp>
	I0407 13:43:20.840623 1218480 main.go:141] libmachine: (default-k8s-diff-port-405061) DBG |       <range start='192.168.72.2' end='192.168.72.253'/>
	I0407 13:43:20.840635 1218480 main.go:141] libmachine: (default-k8s-diff-port-405061) DBG |     </dhcp>
	I0407 13:43:20.840655 1218480 main.go:141] libmachine: (default-k8s-diff-port-405061) DBG |   </ip>
	I0407 13:43:20.840664 1218480 main.go:141] libmachine: (default-k8s-diff-port-405061) DBG |   
	I0407 13:43:20.840672 1218480 main.go:141] libmachine: (default-k8s-diff-port-405061) DBG | </network>
	I0407 13:43:20.840681 1218480 main.go:141] libmachine: (default-k8s-diff-port-405061) DBG | 
	I0407 13:43:20.847017 1218480 main.go:141] libmachine: (default-k8s-diff-port-405061) DBG | trying to create private KVM network mk-default-k8s-diff-port-405061 192.168.72.0/24...
	I0407 13:43:20.937795 1218480 main.go:141] libmachine: (default-k8s-diff-port-405061) DBG | private KVM network mk-default-k8s-diff-port-405061 192.168.72.0/24 created
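The subnet probing above (192.168.39.0/24, 192.168.50.0/24 and 192.168.61.0/24 are taken, so 192.168.72.0/24 is used) amounts to walking candidate /24 networks and taking the first one that does not overlap an existing libvirt network. A simplified sketch under that assumption, not the kvm2 driver's actual network.go:

package main

import (
	"fmt"
	"net"
)

// firstFreeSubnet walks the 192.168.x.0/24 candidates in the same order the
// log shows (39, 50, 61, 72, ...) and returns the first one that does not
// overlap an already-reserved network.
func firstFreeSubnet(taken []*net.IPNet) string {
	for third := 39; third <= 254; third += 11 {
		candidate := fmt.Sprintf("192.168.%d.0/24", third)
		_, cidr, _ := net.ParseCIDR(candidate)
		free := true
		for _, t := range taken {
			if t.Contains(cidr.IP) || cidr.Contains(t.IP) {
				free = false
				break
			}
		}
		if free {
			return candidate
		}
	}
	return ""
}

func main() {
	var taken []*net.IPNet
	for _, s := range []string{"192.168.39.0/24", "192.168.50.0/24", "192.168.61.0/24"} {
		_, n, _ := net.ParseCIDR(s)
		taken = append(taken, n)
	}
	fmt.Println(firstFreeSubnet(taken)) // prints 192.168.72.0/24, as in the log
}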
	I0407 13:43:20.938174 1218480 main.go:141] libmachine: (default-k8s-diff-port-405061) setting up store path in /home/jenkins/minikube-integration/20602-1162386/.minikube/machines/default-k8s-diff-port-405061 ...
	I0407 13:43:20.938236 1218480 main.go:141] libmachine: (default-k8s-diff-port-405061) building disk image from file:///home/jenkins/minikube-integration/20602-1162386/.minikube/cache/iso/amd64/minikube-v1.35.0-amd64.iso
	I0407 13:43:20.938256 1218480 main.go:141] libmachine: (default-k8s-diff-port-405061) DBG | I0407 13:43:20.938097 1218503 common.go:144] Making disk image using store path: /home/jenkins/minikube-integration/20602-1162386/.minikube
	I0407 13:43:20.938305 1218480 main.go:141] libmachine: (default-k8s-diff-port-405061) Downloading /home/jenkins/minikube-integration/20602-1162386/.minikube/cache/boot2docker.iso from file:///home/jenkins/minikube-integration/20602-1162386/.minikube/cache/iso/amd64/minikube-v1.35.0-amd64.iso...
	I0407 13:43:21.256531 1218480 main.go:141] libmachine: (default-k8s-diff-port-405061) DBG | I0407 13:43:21.256376 1218503 common.go:151] Creating ssh key: /home/jenkins/minikube-integration/20602-1162386/.minikube/machines/default-k8s-diff-port-405061/id_rsa...
	I0407 13:43:21.369757 1218480 main.go:141] libmachine: (default-k8s-diff-port-405061) DBG | I0407 13:43:21.369530 1218503 common.go:157] Creating raw disk image: /home/jenkins/minikube-integration/20602-1162386/.minikube/machines/default-k8s-diff-port-405061/default-k8s-diff-port-405061.rawdisk...
	I0407 13:43:21.369805 1218480 main.go:141] libmachine: (default-k8s-diff-port-405061) DBG | Writing magic tar header
	I0407 13:43:21.369825 1218480 main.go:141] libmachine: (default-k8s-diff-port-405061) DBG | Writing SSH key tar header
	I0407 13:43:21.369839 1218480 main.go:141] libmachine: (default-k8s-diff-port-405061) DBG | I0407 13:43:21.369675 1218503 common.go:171] Fixing permissions on /home/jenkins/minikube-integration/20602-1162386/.minikube/machines/default-k8s-diff-port-405061 ...
	I0407 13:43:21.369854 1218480 main.go:141] libmachine: (default-k8s-diff-port-405061) setting executable bit set on /home/jenkins/minikube-integration/20602-1162386/.minikube/machines/default-k8s-diff-port-405061 (perms=drwx------)
	I0407 13:43:21.369874 1218480 main.go:141] libmachine: (default-k8s-diff-port-405061) setting executable bit set on /home/jenkins/minikube-integration/20602-1162386/.minikube/machines (perms=drwxr-xr-x)
	I0407 13:43:21.369887 1218480 main.go:141] libmachine: (default-k8s-diff-port-405061) setting executable bit set on /home/jenkins/minikube-integration/20602-1162386/.minikube (perms=drwxr-xr-x)
	I0407 13:43:21.369900 1218480 main.go:141] libmachine: (default-k8s-diff-port-405061) DBG | checking permissions on dir: /home/jenkins/minikube-integration/20602-1162386/.minikube/machines/default-k8s-diff-port-405061
	I0407 13:43:21.369920 1218480 main.go:141] libmachine: (default-k8s-diff-port-405061) DBG | checking permissions on dir: /home/jenkins/minikube-integration/20602-1162386/.minikube/machines
	I0407 13:43:21.369933 1218480 main.go:141] libmachine: (default-k8s-diff-port-405061) DBG | checking permissions on dir: /home/jenkins/minikube-integration/20602-1162386/.minikube
	I0407 13:43:21.369947 1218480 main.go:141] libmachine: (default-k8s-diff-port-405061) DBG | checking permissions on dir: /home/jenkins/minikube-integration/20602-1162386
	I0407 13:43:21.369960 1218480 main.go:141] libmachine: (default-k8s-diff-port-405061) DBG | checking permissions on dir: /home/jenkins/minikube-integration
	I0407 13:43:21.369970 1218480 main.go:141] libmachine: (default-k8s-diff-port-405061) setting executable bit set on /home/jenkins/minikube-integration/20602-1162386 (perms=drwxrwxr-x)
	I0407 13:43:21.370002 1218480 main.go:141] libmachine: (default-k8s-diff-port-405061) DBG | checking permissions on dir: /home/jenkins
	I0407 13:43:21.370019 1218480 main.go:141] libmachine: (default-k8s-diff-port-405061) DBG | checking permissions on dir: /home
	I0407 13:43:21.370027 1218480 main.go:141] libmachine: (default-k8s-diff-port-405061) DBG | skipping /home - not owner
	I0407 13:43:21.370119 1218480 main.go:141] libmachine: (default-k8s-diff-port-405061) setting executable bit set on /home/jenkins/minikube-integration (perms=drwxrwxr-x)
	I0407 13:43:21.370172 1218480 main.go:141] libmachine: (default-k8s-diff-port-405061) setting executable bit set on /home/jenkins (perms=drwxr-xr-x)
	I0407 13:43:21.370202 1218480 main.go:141] libmachine: (default-k8s-diff-port-405061) creating domain...
	I0407 13:43:21.371513 1218480 main.go:141] libmachine: (default-k8s-diff-port-405061) define libvirt domain using xml: 
	I0407 13:43:21.371542 1218480 main.go:141] libmachine: (default-k8s-diff-port-405061) <domain type='kvm'>
	I0407 13:43:21.371564 1218480 main.go:141] libmachine: (default-k8s-diff-port-405061)   <name>default-k8s-diff-port-405061</name>
	I0407 13:43:21.371575 1218480 main.go:141] libmachine: (default-k8s-diff-port-405061)   <memory unit='MiB'>2200</memory>
	I0407 13:43:21.371584 1218480 main.go:141] libmachine: (default-k8s-diff-port-405061)   <vcpu>2</vcpu>
	I0407 13:43:21.371594 1218480 main.go:141] libmachine: (default-k8s-diff-port-405061)   <features>
	I0407 13:43:21.371601 1218480 main.go:141] libmachine: (default-k8s-diff-port-405061)     <acpi/>
	I0407 13:43:21.371609 1218480 main.go:141] libmachine: (default-k8s-diff-port-405061)     <apic/>
	I0407 13:43:21.371616 1218480 main.go:141] libmachine: (default-k8s-diff-port-405061)     <pae/>
	I0407 13:43:21.371622 1218480 main.go:141] libmachine: (default-k8s-diff-port-405061)     
	I0407 13:43:21.371630 1218480 main.go:141] libmachine: (default-k8s-diff-port-405061)   </features>
	I0407 13:43:21.371640 1218480 main.go:141] libmachine: (default-k8s-diff-port-405061)   <cpu mode='host-passthrough'>
	I0407 13:43:21.371691 1218480 main.go:141] libmachine: (default-k8s-diff-port-405061)   
	I0407 13:43:21.371726 1218480 main.go:141] libmachine: (default-k8s-diff-port-405061)   </cpu>
	I0407 13:43:21.371735 1218480 main.go:141] libmachine: (default-k8s-diff-port-405061)   <os>
	I0407 13:43:21.371743 1218480 main.go:141] libmachine: (default-k8s-diff-port-405061)     <type>hvm</type>
	I0407 13:43:21.371750 1218480 main.go:141] libmachine: (default-k8s-diff-port-405061)     <boot dev='cdrom'/>
	I0407 13:43:21.371757 1218480 main.go:141] libmachine: (default-k8s-diff-port-405061)     <boot dev='hd'/>
	I0407 13:43:21.371764 1218480 main.go:141] libmachine: (default-k8s-diff-port-405061)     <bootmenu enable='no'/>
	I0407 13:43:21.371771 1218480 main.go:141] libmachine: (default-k8s-diff-port-405061)   </os>
	I0407 13:43:21.371776 1218480 main.go:141] libmachine: (default-k8s-diff-port-405061)   <devices>
	I0407 13:43:21.371782 1218480 main.go:141] libmachine: (default-k8s-diff-port-405061)     <disk type='file' device='cdrom'>
	I0407 13:43:21.371794 1218480 main.go:141] libmachine: (default-k8s-diff-port-405061)       <source file='/home/jenkins/minikube-integration/20602-1162386/.minikube/machines/default-k8s-diff-port-405061/boot2docker.iso'/>
	I0407 13:43:21.371799 1218480 main.go:141] libmachine: (default-k8s-diff-port-405061)       <target dev='hdc' bus='scsi'/>
	I0407 13:43:21.371809 1218480 main.go:141] libmachine: (default-k8s-diff-port-405061)       <readonly/>
	I0407 13:43:21.371817 1218480 main.go:141] libmachine: (default-k8s-diff-port-405061)     </disk>
	I0407 13:43:21.371876 1218480 main.go:141] libmachine: (default-k8s-diff-port-405061)     <disk type='file' device='disk'>
	I0407 13:43:21.371923 1218480 main.go:141] libmachine: (default-k8s-diff-port-405061)       <driver name='qemu' type='raw' cache='default' io='threads' />
	I0407 13:43:21.371947 1218480 main.go:141] libmachine: (default-k8s-diff-port-405061)       <source file='/home/jenkins/minikube-integration/20602-1162386/.minikube/machines/default-k8s-diff-port-405061/default-k8s-diff-port-405061.rawdisk'/>
	I0407 13:43:21.371965 1218480 main.go:141] libmachine: (default-k8s-diff-port-405061)       <target dev='hda' bus='virtio'/>
	I0407 13:43:21.371978 1218480 main.go:141] libmachine: (default-k8s-diff-port-405061)     </disk>
	I0407 13:43:21.372001 1218480 main.go:141] libmachine: (default-k8s-diff-port-405061)     <interface type='network'>
	I0407 13:43:21.372017 1218480 main.go:141] libmachine: (default-k8s-diff-port-405061)       <source network='mk-default-k8s-diff-port-405061'/>
	I0407 13:43:21.372028 1218480 main.go:141] libmachine: (default-k8s-diff-port-405061)       <model type='virtio'/>
	I0407 13:43:21.372038 1218480 main.go:141] libmachine: (default-k8s-diff-port-405061)     </interface>
	I0407 13:43:21.372050 1218480 main.go:141] libmachine: (default-k8s-diff-port-405061)     <interface type='network'>
	I0407 13:43:21.372063 1218480 main.go:141] libmachine: (default-k8s-diff-port-405061)       <source network='default'/>
	I0407 13:43:21.372071 1218480 main.go:141] libmachine: (default-k8s-diff-port-405061)       <model type='virtio'/>
	I0407 13:43:21.372077 1218480 main.go:141] libmachine: (default-k8s-diff-port-405061)     </interface>
	I0407 13:43:21.372084 1218480 main.go:141] libmachine: (default-k8s-diff-port-405061)     <serial type='pty'>
	I0407 13:43:21.372092 1218480 main.go:141] libmachine: (default-k8s-diff-port-405061)       <target port='0'/>
	I0407 13:43:21.372102 1218480 main.go:141] libmachine: (default-k8s-diff-port-405061)     </serial>
	I0407 13:43:21.372111 1218480 main.go:141] libmachine: (default-k8s-diff-port-405061)     <console type='pty'>
	I0407 13:43:21.372125 1218480 main.go:141] libmachine: (default-k8s-diff-port-405061)       <target type='serial' port='0'/>
	I0407 13:43:21.372136 1218480 main.go:141] libmachine: (default-k8s-diff-port-405061)     </console>
	I0407 13:43:21.372155 1218480 main.go:141] libmachine: (default-k8s-diff-port-405061)     <rng model='virtio'>
	I0407 13:43:21.372167 1218480 main.go:141] libmachine: (default-k8s-diff-port-405061)       <backend model='random'>/dev/random</backend>
	I0407 13:43:21.372174 1218480 main.go:141] libmachine: (default-k8s-diff-port-405061)     </rng>
	I0407 13:43:21.372182 1218480 main.go:141] libmachine: (default-k8s-diff-port-405061)     
	I0407 13:43:21.372190 1218480 main.go:141] libmachine: (default-k8s-diff-port-405061)     
	I0407 13:43:21.372197 1218480 main.go:141] libmachine: (default-k8s-diff-port-405061)   </devices>
	I0407 13:43:21.372206 1218480 main.go:141] libmachine: (default-k8s-diff-port-405061) </domain>
	I0407 13:43:21.372220 1218480 main.go:141] libmachine: (default-k8s-diff-port-405061) 
	I0407 13:43:21.377733 1218480 main.go:141] libmachine: (default-k8s-diff-port-405061) DBG | domain default-k8s-diff-port-405061 has defined MAC address 52:54:00:91:c0:bf in network default
	I0407 13:43:21.378826 1218480 main.go:141] libmachine: (default-k8s-diff-port-405061) starting domain...
	I0407 13:43:21.378860 1218480 main.go:141] libmachine: (default-k8s-diff-port-405061) ensuring networks are active...
	I0407 13:43:21.378874 1218480 main.go:141] libmachine: (default-k8s-diff-port-405061) DBG | domain default-k8s-diff-port-405061 has defined MAC address 52:54:00:52:9b:84 in network mk-default-k8s-diff-port-405061
	I0407 13:43:21.380017 1218480 main.go:141] libmachine: (default-k8s-diff-port-405061) Ensuring network default is active
	I0407 13:43:21.380391 1218480 main.go:141] libmachine: (default-k8s-diff-port-405061) Ensuring network mk-default-k8s-diff-port-405061 is active
	I0407 13:43:21.381235 1218480 main.go:141] libmachine: (default-k8s-diff-port-405061) getting domain XML...
	I0407 13:43:21.382516 1218480 main.go:141] libmachine: (default-k8s-diff-port-405061) creating domain...
	I0407 13:43:22.765302 1218480 main.go:141] libmachine: (default-k8s-diff-port-405061) waiting for IP...
	I0407 13:43:22.766117 1218480 main.go:141] libmachine: (default-k8s-diff-port-405061) DBG | domain default-k8s-diff-port-405061 has defined MAC address 52:54:00:52:9b:84 in network mk-default-k8s-diff-port-405061
	I0407 13:43:22.766752 1218480 main.go:141] libmachine: (default-k8s-diff-port-405061) DBG | unable to find current IP address of domain default-k8s-diff-port-405061 in network mk-default-k8s-diff-port-405061
	I0407 13:43:22.766850 1218480 main.go:141] libmachine: (default-k8s-diff-port-405061) DBG | I0407 13:43:22.766757 1218503 retry.go:31] will retry after 208.652669ms: waiting for domain to come up
	I0407 13:43:22.977664 1218480 main.go:141] libmachine: (default-k8s-diff-port-405061) DBG | domain default-k8s-diff-port-405061 has defined MAC address 52:54:00:52:9b:84 in network mk-default-k8s-diff-port-405061
	I0407 13:43:22.978389 1218480 main.go:141] libmachine: (default-k8s-diff-port-405061) DBG | unable to find current IP address of domain default-k8s-diff-port-405061 in network mk-default-k8s-diff-port-405061
	I0407 13:43:22.978421 1218480 main.go:141] libmachine: (default-k8s-diff-port-405061) DBG | I0407 13:43:22.978347 1218503 retry.go:31] will retry after 266.024356ms: waiting for domain to come up
	I0407 13:43:23.246059 1218480 main.go:141] libmachine: (default-k8s-diff-port-405061) DBG | domain default-k8s-diff-port-405061 has defined MAC address 52:54:00:52:9b:84 in network mk-default-k8s-diff-port-405061
	I0407 13:43:23.246760 1218480 main.go:141] libmachine: (default-k8s-diff-port-405061) DBG | unable to find current IP address of domain default-k8s-diff-port-405061 in network mk-default-k8s-diff-port-405061
	I0407 13:43:23.246800 1218480 main.go:141] libmachine: (default-k8s-diff-port-405061) DBG | I0407 13:43:23.246720 1218503 retry.go:31] will retry after 354.049081ms: waiting for domain to come up
	I0407 13:43:23.602766 1218480 main.go:141] libmachine: (default-k8s-diff-port-405061) DBG | domain default-k8s-diff-port-405061 has defined MAC address 52:54:00:52:9b:84 in network mk-default-k8s-diff-port-405061
	I0407 13:43:23.603329 1218480 main.go:141] libmachine: (default-k8s-diff-port-405061) DBG | unable to find current IP address of domain default-k8s-diff-port-405061 in network mk-default-k8s-diff-port-405061
	I0407 13:43:23.603397 1218480 main.go:141] libmachine: (default-k8s-diff-port-405061) DBG | I0407 13:43:23.603331 1218503 retry.go:31] will retry after 475.762654ms: waiting for domain to come up
	I0407 13:43:24.081499 1218480 main.go:141] libmachine: (default-k8s-diff-port-405061) DBG | domain default-k8s-diff-port-405061 has defined MAC address 52:54:00:52:9b:84 in network mk-default-k8s-diff-port-405061
	I0407 13:43:24.082246 1218480 main.go:141] libmachine: (default-k8s-diff-port-405061) DBG | unable to find current IP address of domain default-k8s-diff-port-405061 in network mk-default-k8s-diff-port-405061
	I0407 13:43:24.082317 1218480 main.go:141] libmachine: (default-k8s-diff-port-405061) DBG | I0407 13:43:24.082219 1218503 retry.go:31] will retry after 465.495444ms: waiting for domain to come up
	I0407 13:43:24.548877 1218480 main.go:141] libmachine: (default-k8s-diff-port-405061) DBG | domain default-k8s-diff-port-405061 has defined MAC address 52:54:00:52:9b:84 in network mk-default-k8s-diff-port-405061
	I0407 13:43:24.549354 1218480 main.go:141] libmachine: (default-k8s-diff-port-405061) DBG | unable to find current IP address of domain default-k8s-diff-port-405061 in network mk-default-k8s-diff-port-405061
	I0407 13:43:24.549385 1218480 main.go:141] libmachine: (default-k8s-diff-port-405061) DBG | I0407 13:43:24.549331 1218503 retry.go:31] will retry after 719.226379ms: waiting for domain to come up
	I0407 13:43:25.270405 1218480 main.go:141] libmachine: (default-k8s-diff-port-405061) DBG | domain default-k8s-diff-port-405061 has defined MAC address 52:54:00:52:9b:84 in network mk-default-k8s-diff-port-405061
	I0407 13:43:25.270973 1218480 main.go:141] libmachine: (default-k8s-diff-port-405061) DBG | unable to find current IP address of domain default-k8s-diff-port-405061 in network mk-default-k8s-diff-port-405061
	I0407 13:43:25.271008 1218480 main.go:141] libmachine: (default-k8s-diff-port-405061) DBG | I0407 13:43:25.270916 1218503 retry.go:31] will retry after 985.683292ms: waiting for domain to come up
	I0407 13:43:26.188516 1217742 out.go:204]   - Booting up control plane ...
	I0407 13:43:26.258443 1218480 main.go:141] libmachine: (default-k8s-diff-port-405061) DBG | domain default-k8s-diff-port-405061 has defined MAC address 52:54:00:52:9b:84 in network mk-default-k8s-diff-port-405061
	I0407 13:43:26.259065 1218480 main.go:141] libmachine: (default-k8s-diff-port-405061) DBG | unable to find current IP address of domain default-k8s-diff-port-405061 in network mk-default-k8s-diff-port-405061
	I0407 13:43:26.259099 1218480 main.go:141] libmachine: (default-k8s-diff-port-405061) DBG | I0407 13:43:26.259039 1218503 retry.go:31] will retry after 1.355724689s: waiting for domain to come up
	I0407 13:43:27.616983 1218480 main.go:141] libmachine: (default-k8s-diff-port-405061) DBG | domain default-k8s-diff-port-405061 has defined MAC address 52:54:00:52:9b:84 in network mk-default-k8s-diff-port-405061
	I0407 13:43:27.617535 1218480 main.go:141] libmachine: (default-k8s-diff-port-405061) DBG | unable to find current IP address of domain default-k8s-diff-port-405061 in network mk-default-k8s-diff-port-405061
	I0407 13:43:27.617571 1218480 main.go:141] libmachine: (default-k8s-diff-port-405061) DBG | I0407 13:43:27.617513 1218503 retry.go:31] will retry after 1.486626545s: waiting for domain to come up
	I0407 13:43:29.106541 1218480 main.go:141] libmachine: (default-k8s-diff-port-405061) DBG | domain default-k8s-diff-port-405061 has defined MAC address 52:54:00:52:9b:84 in network mk-default-k8s-diff-port-405061
	I0407 13:43:29.107156 1218480 main.go:141] libmachine: (default-k8s-diff-port-405061) DBG | unable to find current IP address of domain default-k8s-diff-port-405061 in network mk-default-k8s-diff-port-405061
	I0407 13:43:29.107180 1218480 main.go:141] libmachine: (default-k8s-diff-port-405061) DBG | I0407 13:43:29.107117 1218503 retry.go:31] will retry after 2.161936125s: waiting for domain to come up
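The repeated "will retry after ..." lines above are a backoff loop polling until the new domain picks up a DHCP lease. A rough sketch of that loop shape; lookupDomainIP is a hypothetical stand-in for the real libvirt lease query, and the intervals only approximate the ones in the log.

package main

import (
	"errors"
	"fmt"
	"math/rand"
	"time"
)

var errNoLease = errors.New("no DHCP lease yet")

// lookupDomainIP is a placeholder for the real libvirt/DHCP lease query; it
// always reports "not yet" so the retry loop shape is visible when run.
func lookupDomainIP(domain string) (string, error) {
	return "", errNoLease
}

// waitForIP polls for the domain's IP with a growing, jittered delay,
// roughly matching the retry intervals printed in the log.
func waitForIP(domain string, timeout time.Duration) (string, error) {
	deadline := time.Now().Add(timeout)
	delay := 200 * time.Millisecond
	for time.Now().Before(deadline) {
		if ip, err := lookupDomainIP(domain); err == nil {
			return ip, nil
		}
		jitter := time.Duration(rand.Int63n(int64(delay / 2)))
		fmt.Printf("will retry after %v: waiting for domain to come up\n", delay+jitter)
		time.Sleep(delay + jitter)
		if delay < 2*time.Second {
			delay += delay / 2
		}
	}
	return "", fmt.Errorf("timed out waiting for %s to get an IP", domain)
}

func main() {
	if _, err := waitForIP("default-k8s-diff-port-405061", time.Second); err != nil {
		fmt.Println(err)
	}
}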
	I0407 13:43:32.418179 1213906 kubeadm.go:310] error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	I0407 13:43:32.418361 1213906 kubeadm.go:310] To see the stack trace of this error execute with --v=5 or higher
	I0407 13:43:32.420514 1213906 kubeadm.go:310] [init] Using Kubernetes version: v1.20.0
	I0407 13:43:32.420604 1213906 kubeadm.go:310] [preflight] Running pre-flight checks
	I0407 13:43:32.420749 1213906 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0407 13:43:32.420904 1213906 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0407 13:43:32.421067 1213906 kubeadm.go:310] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0407 13:43:32.421182 1213906 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0407 13:43:32.424021 1213906 out.go:235]   - Generating certificates and keys ...
	I0407 13:43:32.424232 1213906 kubeadm.go:310] [certs] Using existing ca certificate authority
	I0407 13:43:32.424366 1213906 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I0407 13:43:32.424506 1213906 kubeadm.go:310] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I0407 13:43:32.424615 1213906 kubeadm.go:310] [certs] Using existing front-proxy-ca certificate authority
	I0407 13:43:32.424708 1213906 kubeadm.go:310] [certs] Using existing front-proxy-client certificate and key on disk
	I0407 13:43:32.424788 1213906 kubeadm.go:310] [certs] Using existing etcd/ca certificate authority
	I0407 13:43:32.424880 1213906 kubeadm.go:310] [certs] Using existing etcd/server certificate and key on disk
	I0407 13:43:32.424967 1213906 kubeadm.go:310] [certs] Using existing etcd/peer certificate and key on disk
	I0407 13:43:32.425083 1213906 kubeadm.go:310] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I0407 13:43:32.425204 1213906 kubeadm.go:310] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I0407 13:43:32.425270 1213906 kubeadm.go:310] [certs] Using the existing "sa" key
	I0407 13:43:32.425345 1213906 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0407 13:43:32.425413 1213906 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0407 13:43:32.425495 1213906 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0407 13:43:32.425588 1213906 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0407 13:43:32.425669 1213906 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0407 13:43:32.425860 1213906 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0407 13:43:32.425989 1213906 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0407 13:43:32.426036 1213906 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I0407 13:43:32.426137 1213906 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0407 13:43:32.428310 1213906 out.go:235]   - Booting up control plane ...
	I0407 13:43:32.428489 1213906 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0407 13:43:32.428638 1213906 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0407 13:43:32.428740 1213906 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0407 13:43:32.428861 1213906 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0407 13:43:32.429118 1213906 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I0407 13:43:32.429180 1213906 kubeadm.go:310] [kubelet-check] Initial timeout of 40s passed.
	I0407 13:43:32.429283 1213906 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0407 13:43:32.429534 1213906 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0407 13:43:32.429648 1213906 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0407 13:43:32.429960 1213906 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0407 13:43:32.430079 1213906 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0407 13:43:32.430342 1213906 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0407 13:43:32.430456 1213906 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0407 13:43:32.430733 1213906 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0407 13:43:32.430863 1213906 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0407 13:43:32.431145 1213906 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0407 13:43:32.431166 1213906 kubeadm.go:310] 
	I0407 13:43:32.431235 1213906 kubeadm.go:310] 	Unfortunately, an error has occurred:
	I0407 13:43:32.431298 1213906 kubeadm.go:310] 		timed out waiting for the condition
	I0407 13:43:32.431313 1213906 kubeadm.go:310] 
	I0407 13:43:32.431374 1213906 kubeadm.go:310] 	This error is likely caused by:
	I0407 13:43:32.431438 1213906 kubeadm.go:310] 		- The kubelet is not running
	I0407 13:43:32.431603 1213906 kubeadm.go:310] 		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	I0407 13:43:32.431627 1213906 kubeadm.go:310] 
	I0407 13:43:32.431775 1213906 kubeadm.go:310] 	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
	I0407 13:43:32.431829 1213906 kubeadm.go:310] 		- 'systemctl status kubelet'
	I0407 13:43:32.431870 1213906 kubeadm.go:310] 		- 'journalctl -xeu kubelet'
	I0407 13:43:32.431879 1213906 kubeadm.go:310] 
	I0407 13:43:32.432010 1213906 kubeadm.go:310] 	Additionally, a control plane component may have crashed or exited when started by the container runtime.
	I0407 13:43:32.432141 1213906 kubeadm.go:310] 	To troubleshoot, list all containers using your preferred container runtimes CLI.
	I0407 13:43:32.432164 1213906 kubeadm.go:310] 
	I0407 13:43:32.432338 1213906 kubeadm.go:310] 	Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
	I0407 13:43:32.432452 1213906 kubeadm.go:310] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
	I0407 13:43:32.432575 1213906 kubeadm.go:310] 		Once you have found the failing container, you can inspect its logs with:
	I0407 13:43:32.432671 1213906 kubeadm.go:310] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	I0407 13:43:32.432747 1213906 kubeadm.go:310] 
	I0407 13:43:32.432774 1213906 kubeadm.go:394] duration metric: took 7m59.00406937s to StartCluster
	I0407 13:43:32.432831 1213906 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0407 13:43:32.432921 1213906 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0407 13:43:32.472943 1213906 cri.go:89] found id: ""
	I0407 13:43:32.472977 1213906 logs.go:282] 0 containers: []
	W0407 13:43:32.472987 1213906 logs.go:284] No container was found matching "kube-apiserver"
	I0407 13:43:32.472994 1213906 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0407 13:43:32.473054 1213906 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0407 13:43:32.522049 1213906 cri.go:89] found id: ""
	I0407 13:43:32.522096 1213906 logs.go:282] 0 containers: []
	W0407 13:43:32.522111 1213906 logs.go:284] No container was found matching "etcd"
	I0407 13:43:32.522122 1213906 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0407 13:43:32.522349 1213906 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0407 13:43:32.574910 1213906 cri.go:89] found id: ""
	I0407 13:43:32.574967 1213906 logs.go:282] 0 containers: []
	W0407 13:43:32.574980 1213906 logs.go:284] No container was found matching "coredns"
	I0407 13:43:32.574990 1213906 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0407 13:43:32.575073 1213906 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0407 13:43:32.616222 1213906 cri.go:89] found id: ""
	I0407 13:43:32.616263 1213906 logs.go:282] 0 containers: []
	W0407 13:43:32.616274 1213906 logs.go:284] No container was found matching "kube-scheduler"
	I0407 13:43:32.616282 1213906 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0407 13:43:32.616363 1213906 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0407 13:43:32.667506 1213906 cri.go:89] found id: ""
	I0407 13:43:32.667552 1213906 logs.go:282] 0 containers: []
	W0407 13:43:32.667564 1213906 logs.go:284] No container was found matching "kube-proxy"
	I0407 13:43:32.667576 1213906 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0407 13:43:32.667663 1213906 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0407 13:43:32.714537 1213906 cri.go:89] found id: ""
	I0407 13:43:32.714580 1213906 logs.go:282] 0 containers: []
	W0407 13:43:32.714594 1213906 logs.go:284] No container was found matching "kube-controller-manager"
	I0407 13:43:32.714602 1213906 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0407 13:43:32.714679 1213906 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0407 13:43:32.755513 1213906 cri.go:89] found id: ""
	I0407 13:43:32.755548 1213906 logs.go:282] 0 containers: []
	W0407 13:43:32.755560 1213906 logs.go:284] No container was found matching "kindnet"
	I0407 13:43:32.755570 1213906 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0407 13:43:32.755650 1213906 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0407 13:43:32.807417 1213906 cri.go:89] found id: ""
	I0407 13:43:32.807459 1213906 logs.go:282] 0 containers: []
	W0407 13:43:32.807472 1213906 logs.go:284] No container was found matching "kubernetes-dashboard"
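The sweep above runs "crictl ps -a --quiet --name=<component>" once per control-plane component and records when nothing matches. An illustrative stand-in that reproduces the same sweep (not minikube's logs.go; assumes sudo and crictl are on the node's PATH):

package main

import (
	"fmt"
	"os/exec"
	"strings"
)

func main() {
	// Same component list the log walks through, in the same order.
	components := []string{
		"kube-apiserver", "etcd", "coredns", "kube-scheduler",
		"kube-proxy", "kube-controller-manager", "kindnet", "kubernetes-dashboard",
	}
	for _, name := range components {
		out, err := exec.Command("sudo", "crictl", "ps", "-a", "--quiet", "--name="+name).Output()
		if err != nil {
			fmt.Printf("listing %s: %v\n", name, err)
			continue
		}
		ids := strings.Fields(string(out))
		if len(ids) == 0 {
			fmt.Printf("No container was found matching %q\n", name)
			continue
		}
		fmt.Printf("%s: %d container(s): %v\n", name, len(ids), ids)
	}
}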
	I0407 13:43:32.807488 1213906 logs.go:123] Gathering logs for kubelet ...
	I0407 13:43:32.807508 1213906 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0407 13:43:32.872141 1213906 logs.go:123] Gathering logs for dmesg ...
	I0407 13:43:32.872195 1213906 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0407 13:43:32.887946 1213906 logs.go:123] Gathering logs for describe nodes ...
	I0407 13:43:32.887997 1213906 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0407 13:43:32.970468 1213906 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0407 13:43:32.970504 1213906 logs.go:123] Gathering logs for CRI-O ...
	I0407 13:43:32.970523 1213906 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0407 13:43:33.089367 1213906 logs.go:123] Gathering logs for container status ...
	I0407 13:43:33.089425 1213906 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	W0407 13:43:33.138580 1213906 out.go:418] Error starting cluster: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	W0407 13:43:33.138697 1213906 out.go:270] * 
	W0407 13:43:33.138777 1213906 out.go:270] X Error starting cluster: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	W0407 13:43:33.138796 1213906 out.go:270] * 
	W0407 13:43:33.139698 1213906 out.go:293] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0407 13:43:33.143573 1213906 out.go:201] 
	W0407 13:43:33.145072 1213906 out.go:270] X Exiting due to K8S_KUBELET_NOT_RUNNING: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	W0407 13:43:33.145155 1213906 out.go:270] * Suggestion: Check output of 'journalctl -xeu kubelet', try passing --extra-config=kubelet.cgroup-driver=systemd to minikube start
	W0407 13:43:33.145183 1213906 out.go:270] * Related issue: https://github.com/kubernetes/minikube/issues/4172
	I0407 13:43:33.147286 1213906 out.go:201] 
	
	
	==> CRI-O <==
	Apr 07 13:43:34 old-k8s-version-435730 crio[627]: time="2025-04-07 13:43:34.253109948Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1744033414253076495,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:112689,},InodesUsed:&UInt64Value{Value:57,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=e3090bd0-4ee1-4a63-998c-1a0e517fa133 name=/runtime.v1.ImageService/ImageFsInfo
	Apr 07 13:43:34 old-k8s-version-435730 crio[627]: time="2025-04-07 13:43:34.253888626Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=29703c77-7e8f-44c4-9356-0291f295d901 name=/runtime.v1.RuntimeService/ListContainers
	Apr 07 13:43:34 old-k8s-version-435730 crio[627]: time="2025-04-07 13:43:34.253977317Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=29703c77-7e8f-44c4-9356-0291f295d901 name=/runtime.v1.RuntimeService/ListContainers
	Apr 07 13:43:34 old-k8s-version-435730 crio[627]: time="2025-04-07 13:43:34.254077317Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{},}" file="otel-collector/interceptors.go:74" id=29703c77-7e8f-44c4-9356-0291f295d901 name=/runtime.v1.RuntimeService/ListContainers
	Apr 07 13:43:34 old-k8s-version-435730 crio[627]: time="2025-04-07 13:43:34.287738594Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=3b431586-4fb3-42c2-918c-beee74d857b3 name=/runtime.v1.RuntimeService/Version
	Apr 07 13:43:34 old-k8s-version-435730 crio[627]: time="2025-04-07 13:43:34.287825162Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=3b431586-4fb3-42c2-918c-beee74d857b3 name=/runtime.v1.RuntimeService/Version
	Apr 07 13:43:34 old-k8s-version-435730 crio[627]: time="2025-04-07 13:43:34.289056072Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=a388a8c9-5ee9-4284-8280-c6ca61ef6162 name=/runtime.v1.ImageService/ImageFsInfo
	Apr 07 13:43:34 old-k8s-version-435730 crio[627]: time="2025-04-07 13:43:34.289420830Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1744033414289399409,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:112689,},InodesUsed:&UInt64Value{Value:57,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=a388a8c9-5ee9-4284-8280-c6ca61ef6162 name=/runtime.v1.ImageService/ImageFsInfo
	Apr 07 13:43:34 old-k8s-version-435730 crio[627]: time="2025-04-07 13:43:34.290125987Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=cbe7342f-4103-4f78-b0a8-20bfab6995d0 name=/runtime.v1.RuntimeService/ListContainers
	Apr 07 13:43:34 old-k8s-version-435730 crio[627]: time="2025-04-07 13:43:34.290189190Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=cbe7342f-4103-4f78-b0a8-20bfab6995d0 name=/runtime.v1.RuntimeService/ListContainers
	Apr 07 13:43:34 old-k8s-version-435730 crio[627]: time="2025-04-07 13:43:34.290252882Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{},}" file="otel-collector/interceptors.go:74" id=cbe7342f-4103-4f78-b0a8-20bfab6995d0 name=/runtime.v1.RuntimeService/ListContainers
	Apr 07 13:43:34 old-k8s-version-435730 crio[627]: time="2025-04-07 13:43:34.324031507Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=1ef60429-cc3e-437b-a114-d782dc2fbf12 name=/runtime.v1.RuntimeService/Version
	Apr 07 13:43:34 old-k8s-version-435730 crio[627]: time="2025-04-07 13:43:34.324114387Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=1ef60429-cc3e-437b-a114-d782dc2fbf12 name=/runtime.v1.RuntimeService/Version
	Apr 07 13:43:34 old-k8s-version-435730 crio[627]: time="2025-04-07 13:43:34.325434610Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=de21850b-99f3-4c13-8d7f-a516fde3e0ee name=/runtime.v1.ImageService/ImageFsInfo
	Apr 07 13:43:34 old-k8s-version-435730 crio[627]: time="2025-04-07 13:43:34.325854119Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1744033414325829917,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:112689,},InodesUsed:&UInt64Value{Value:57,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=de21850b-99f3-4c13-8d7f-a516fde3e0ee name=/runtime.v1.ImageService/ImageFsInfo
	Apr 07 13:43:34 old-k8s-version-435730 crio[627]: time="2025-04-07 13:43:34.326730948Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=d43e77ab-d0f0-449b-a0ca-52e2d2a8b2ab name=/runtime.v1.RuntimeService/ListContainers
	Apr 07 13:43:34 old-k8s-version-435730 crio[627]: time="2025-04-07 13:43:34.326806758Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=d43e77ab-d0f0-449b-a0ca-52e2d2a8b2ab name=/runtime.v1.RuntimeService/ListContainers
	Apr 07 13:43:34 old-k8s-version-435730 crio[627]: time="2025-04-07 13:43:34.326842904Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{},}" file="otel-collector/interceptors.go:74" id=d43e77ab-d0f0-449b-a0ca-52e2d2a8b2ab name=/runtime.v1.RuntimeService/ListContainers
	Apr 07 13:43:34 old-k8s-version-435730 crio[627]: time="2025-04-07 13:43:34.361248818Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=ff4dbcd7-9df4-4eaa-b1d9-5f5a7cb9b6b4 name=/runtime.v1.RuntimeService/Version
	Apr 07 13:43:34 old-k8s-version-435730 crio[627]: time="2025-04-07 13:43:34.361343919Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=ff4dbcd7-9df4-4eaa-b1d9-5f5a7cb9b6b4 name=/runtime.v1.RuntimeService/Version
	Apr 07 13:43:34 old-k8s-version-435730 crio[627]: time="2025-04-07 13:43:34.363029715Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=bf6c34b7-700d-43f6-9701-4d76047e792e name=/runtime.v1.ImageService/ImageFsInfo
	Apr 07 13:43:34 old-k8s-version-435730 crio[627]: time="2025-04-07 13:43:34.363469183Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1744033414363444952,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:112689,},InodesUsed:&UInt64Value{Value:57,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=bf6c34b7-700d-43f6-9701-4d76047e792e name=/runtime.v1.ImageService/ImageFsInfo
	Apr 07 13:43:34 old-k8s-version-435730 crio[627]: time="2025-04-07 13:43:34.364182612Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=52e3b128-498b-4163-9045-b14561d3acc5 name=/runtime.v1.RuntimeService/ListContainers
	Apr 07 13:43:34 old-k8s-version-435730 crio[627]: time="2025-04-07 13:43:34.364232399Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=52e3b128-498b-4163-9045-b14561d3acc5 name=/runtime.v1.RuntimeService/ListContainers
	Apr 07 13:43:34 old-k8s-version-435730 crio[627]: time="2025-04-07 13:43:34.364267816Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{},}" file="otel-collector/interceptors.go:74" id=52e3b128-498b-4163-9045-b14561d3acc5 name=/runtime.v1.RuntimeService/ListContainers
	
	
	==> container status <==
	CONTAINER           IMAGE               CREATED             STATE               NAME                ATTEMPT             POD ID              POD
	
	
	==> describe nodes <==
	command /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" failed with error: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	
	==> dmesg <==
	[Apr 7 13:35] You have booted with nomodeset. This means your GPU drivers are DISABLED
	[  +0.000001] Any video related functionality will be severely degraded, and you may not even be able to suspend the system properly
	[  +0.000000] Unless you actually understand what nomodeset does, you should reboot without enabling it
	[  +0.054242] Spectre V2 : WARNING: Unprivileged eBPF is enabled with eIBRS on, data leaks possible via Spectre v2 BHB attacks!
	[  +0.042193] acpi PNP0A03:00: fail to add MMCONFIG information, can't access extended PCI configuration space under this bridge.
	[  +5.063295] platform regulatory.0: Direct firmware load for regulatory.db failed with error -2
	[  +2.360663] systemd-fstab-generator[116]: Ignoring "noauto" option for root device
	[  +1.644180] NFSD: Using /var/lib/nfs/v4recovery as the NFSv4 state recovery directory
	[  +0.000009] NFSD: unable to find recovery directory /var/lib/nfs/v4recovery
	[  +0.000001] NFSD: Unable to initialize client recovery tracking! (-2)
	[  +7.768056] systemd-fstab-generator[555]: Ignoring "noauto" option for root device
	[  +0.063980] kauditd_printk_skb: 1 callbacks suppressed
	[  +0.066857] systemd-fstab-generator[567]: Ignoring "noauto" option for root device
	[  +0.213937] systemd-fstab-generator[581]: Ignoring "noauto" option for root device
	[  +0.125997] systemd-fstab-generator[593]: Ignoring "noauto" option for root device
	[  +0.267658] systemd-fstab-generator[618]: Ignoring "noauto" option for root device
	[  +7.911500] systemd-fstab-generator[880]: Ignoring "noauto" option for root device
	[  +0.062381] kauditd_printk_skb: 130 callbacks suppressed
	[  +2.305323] systemd-fstab-generator[1004]: Ignoring "noauto" option for root device
	[ +10.593685] kauditd_printk_skb: 46 callbacks suppressed
	[Apr 7 13:39] systemd-fstab-generator[4897]: Ignoring "noauto" option for root device
	[Apr 7 13:41] systemd-fstab-generator[5176]: Ignoring "noauto" option for root device
	[  +0.065808] kauditd_printk_skb: 12 callbacks suppressed
	
	
	==> kernel <==
	 13:43:34 up 8 min,  0 users,  load average: 0.26, 0.19, 0.10
	Linux old-k8s-version-435730 5.10.207 #1 SMP Tue Jan 14 08:15:54 UTC 2025 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2023.02.9"
	
	
	==> kubelet <==
	Apr 07 13:43:33 old-k8s-version-435730 kubelet[5356]: internal/singleflight.(*Group).doCall(0x70c5750, 0xc000bf0140, 0xc000c095c0, 0x23, 0xc000c1c940)
	Apr 07 13:43:33 old-k8s-version-435730 kubelet[5356]:         /usr/local/go/src/internal/singleflight/singleflight.go:95 +0x2e
	Apr 07 13:43:33 old-k8s-version-435730 kubelet[5356]: created by internal/singleflight.(*Group).DoChan
	Apr 07 13:43:33 old-k8s-version-435730 kubelet[5356]:         /usr/local/go/src/internal/singleflight/singleflight.go:88 +0x2cc
	Apr 07 13:43:33 old-k8s-version-435730 kubelet[5356]: goroutine 136 [runnable]:
	Apr 07 13:43:33 old-k8s-version-435730 kubelet[5356]: net._C2func_getaddrinfo(0xc000be00c0, 0x0, 0xc000c1f800, 0xc000b0e068, 0x0, 0x0, 0x0)
	Apr 07 13:43:33 old-k8s-version-435730 kubelet[5356]:         _cgo_gotypes.go:94 +0x55
	Apr 07 13:43:33 old-k8s-version-435730 kubelet[5356]: net.cgoLookupIPCNAME.func1(0xc000be00c0, 0x20, 0x20, 0xc000c1f800, 0xc000b0e068, 0x0, 0xc000d34ea0, 0x57a492)
	Apr 07 13:43:33 old-k8s-version-435730 kubelet[5356]:         /usr/local/go/src/net/cgo_unix.go:161 +0xc5
	Apr 07 13:43:33 old-k8s-version-435730 kubelet[5356]: net.cgoLookupIPCNAME(0x48ab5d6, 0x3, 0xc000c09590, 0x1f, 0x0, 0x0, 0x0, 0x0, 0x0, 0x0, ...)
	Apr 07 13:43:33 old-k8s-version-435730 kubelet[5356]:         /usr/local/go/src/net/cgo_unix.go:161 +0x16b
	Apr 07 13:43:33 old-k8s-version-435730 kubelet[5356]: net.cgoIPLookup(0xc00021a960, 0x48ab5d6, 0x3, 0xc000c09590, 0x1f)
	Apr 07 13:43:33 old-k8s-version-435730 kubelet[5356]:         /usr/local/go/src/net/cgo_unix.go:218 +0x67
	Apr 07 13:43:33 old-k8s-version-435730 kubelet[5356]: created by net.cgoLookupIP
	Apr 07 13:43:33 old-k8s-version-435730 kubelet[5356]:         /usr/local/go/src/net/cgo_unix.go:228 +0xc7
	Apr 07 13:43:33 old-k8s-version-435730 systemd[1]: kubelet.service: Main process exited, code=exited, status=255/EXCEPTION
	Apr 07 13:43:33 old-k8s-version-435730 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	Apr 07 13:43:33 old-k8s-version-435730 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 20.
	Apr 07 13:43:33 old-k8s-version-435730 systemd[1]: Stopped kubelet: The Kubernetes Node Agent.
	Apr 07 13:43:33 old-k8s-version-435730 systemd[1]: Started kubelet: The Kubernetes Node Agent.
	Apr 07 13:43:33 old-k8s-version-435730 kubelet[5420]: I0407 13:43:33.804889    5420 server.go:416] Version: v1.20.0
	Apr 07 13:43:33 old-k8s-version-435730 kubelet[5420]: I0407 13:43:33.805215    5420 server.go:837] Client rotation is on, will bootstrap in background
	Apr 07 13:43:33 old-k8s-version-435730 kubelet[5420]: I0407 13:43:33.807506    5420 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-client-current.pem".
	Apr 07 13:43:33 old-k8s-version-435730 kubelet[5420]: W0407 13:43:33.808682    5420 manager.go:159] Cannot detect current cgroup on cgroup v2
	Apr 07 13:43:33 old-k8s-version-435730 kubelet[5420]: I0407 13:43:33.809085    5420 dynamic_cafile_content.go:167] Starting client-ca-bundle::/var/lib/minikube/certs/ca.crt
	

                                                
                                                
-- /stdout --
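For reference, the recovery steps that the kubeadm output above prints can be run by hand inside the affected minikube VM. The commands below are only a consolidated sketch of that advice, assuming the profile name old-k8s-version-435730 from this run and the CRI-O socket path shown in the log; they are not part of the captured test output:

	# open a shell in the VM for this profile
	out/minikube-linux-amd64 -p old-k8s-version-435730 ssh
	# check whether the kubelet is running and why it keeps exiting
	sudo systemctl status kubelet
	sudo journalctl -xeu kubelet
	# list control-plane containers known to CRI-O (the container status section above shows an empty list)
	sudo crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause
	# once a failing container is identified, inspect its logs by ID
	sudo crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID
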
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p old-k8s-version-435730 -n old-k8s-version-435730
helpers_test.go:254: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p old-k8s-version-435730 -n old-k8s-version-435730: exit status 2 (262.582234ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:254: status error: exit status 2 (may be ok)
helpers_test.go:256: "old-k8s-version-435730" apiserver is not running, skipping kubectl commands (state="Stopped")
--- FAIL: TestStartStop/group/old-k8s-version/serial/SecondStart (512.36s)
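The "Suggestion" line captured in the logs above points at the kubelet cgroup driver (the kubelet log also shows "Cannot detect current cgroup on cgroup v2"). A hypothetical manual retry of this profile with that setting applied might look like the following; the exact flag set the test harness uses is not reproduced here, so everything except --extra-config=kubelet.cgroup-driver=systemd should be treated as an assumption:

	out/minikube-linux-amd64 start -p old-k8s-version-435730 \
	  --driver=kvm2 --container-runtime=crio \
	  --kubernetes-version=v1.20.0 \
	  --extra-config=kubelet.cgroup-driver=systemd
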

                                                
                                    
x
+
TestPause/serial/SecondStartNoReconfiguration (61.05s)

                                                
                                                
=== RUN   TestPause/serial/SecondStartNoReconfiguration
pause_test.go:92: (dbg) Run:  out/minikube-linux-amd64 start -p pause-111763 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio
pause_test.go:92: (dbg) Done: out/minikube-linux-amd64 start -p pause-111763 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio: (56.163124384s)
pause_test.go:100: expected the second start log output to include "The running cluster does not require reconfiguration" but got: 
-- stdout --
	* [pause-111763] minikube v1.35.0 on Ubuntu 20.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=20602
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/20602-1162386/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/20602-1162386/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the kvm2 driver based on existing profile
	* Starting "pause-111763" primary control-plane node in "pause-111763" cluster
	* Updating the running kvm2 "pause-111763" VM ...
	* Preparing Kubernetes v1.32.2 on CRI-O 1.29.1 ...
	* Configuring bridge CNI (Container Networking Interface) ...
	* Verifying Kubernetes components...
	* Enabled addons: 
	* Done! kubectl is now configured to use "pause-111763" cluster and "default" namespace by default

                                                
                                                
-- /stdout --
** stderr ** 
	I0407 13:38:12.710152 1215662 out.go:345] Setting OutFile to fd 1 ...
	I0407 13:38:12.710725 1215662 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0407 13:38:12.710740 1215662 out.go:358] Setting ErrFile to fd 2...
	I0407 13:38:12.710746 1215662 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0407 13:38:12.711005 1215662 root.go:338] Updating PATH: /home/jenkins/minikube-integration/20602-1162386/.minikube/bin
	I0407 13:38:12.711660 1215662 out.go:352] Setting JSON to false
	I0407 13:38:12.712905 1215662 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-2","uptime":19237,"bootTime":1744013856,"procs":216,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1078-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0407 13:38:12.713058 1215662 start.go:139] virtualization: kvm guest
	I0407 13:38:12.715805 1215662 out.go:177] * [pause-111763] minikube v1.35.0 on Ubuntu 20.04 (kvm/amd64)
	I0407 13:38:12.718419 1215662 out.go:177]   - MINIKUBE_LOCATION=20602
	I0407 13:38:12.718416 1215662 notify.go:220] Checking for updates...
	I0407 13:38:12.720692 1215662 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0407 13:38:12.722446 1215662 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/20602-1162386/kubeconfig
	I0407 13:38:12.724231 1215662 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/20602-1162386/.minikube
	I0407 13:38:12.726029 1215662 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0407 13:38:12.728023 1215662 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0407 13:38:12.730279 1215662 config.go:182] Loaded profile config "pause-111763": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.32.2
	I0407 13:38:12.730845 1215662 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0407 13:38:12.730963 1215662 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0407 13:38:12.749257 1215662 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34461
	I0407 13:38:12.749865 1215662 main.go:141] libmachine: () Calling .GetVersion
	I0407 13:38:12.750543 1215662 main.go:141] libmachine: Using API Version  1
	I0407 13:38:12.750580 1215662 main.go:141] libmachine: () Calling .SetConfigRaw
	I0407 13:38:12.751100 1215662 main.go:141] libmachine: () Calling .GetMachineName
	I0407 13:38:12.751351 1215662 main.go:141] libmachine: (pause-111763) Calling .DriverName
	I0407 13:38:12.751650 1215662 driver.go:394] Setting default libvirt URI to qemu:///system
	I0407 13:38:12.752062 1215662 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0407 13:38:12.752116 1215662 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0407 13:38:12.770118 1215662 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33017
	I0407 13:38:12.770628 1215662 main.go:141] libmachine: () Calling .GetVersion
	I0407 13:38:12.771181 1215662 main.go:141] libmachine: Using API Version  1
	I0407 13:38:12.771210 1215662 main.go:141] libmachine: () Calling .SetConfigRaw
	I0407 13:38:12.771638 1215662 main.go:141] libmachine: () Calling .GetMachineName
	I0407 13:38:12.771850 1215662 main.go:141] libmachine: (pause-111763) Calling .DriverName
	I0407 13:38:12.815130 1215662 out.go:177] * Using the kvm2 driver based on existing profile
	I0407 13:38:12.816745 1215662 start.go:297] selected driver: kvm2
	I0407 13:38:12.816770 1215662 start.go:901] validating driver "kvm2" against &{Name:pause-111763 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.35.0-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.46-1743675393-20591@sha256:8de8167e280414c9d16e4c3da59bc85bc7c9cc24228af995863fc7bcabfcf727 Memory:2048 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.32.2 ClusterName:pa
use-111763 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.50.135 Port:8443 KubernetesVersion:v1.32.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm
:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0407 13:38:12.816913 1215662 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0407 13:38:12.817352 1215662 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0407 13:38:12.817464 1215662 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/20602-1162386/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0407 13:38:12.835337 1215662 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.35.0
	I0407 13:38:12.836313 1215662 cni.go:84] Creating CNI manager for ""
	I0407 13:38:12.836389 1215662 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0407 13:38:12.836477 1215662 start.go:340] cluster config:
	{Name:pause-111763 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.35.0-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.46-1743675393-20591@sha256:8de8167e280414c9d16e4c3da59bc85bc7c9cc24228af995863fc7bcabfcf727 Memory:2048 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.32.2 ClusterName:pause-111763 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServe
rNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.50.135 Port:8443 KubernetesVersion:v1.32.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-alias
es:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0407 13:38:12.836671 1215662 iso.go:125] acquiring lock: {Name:mk51e1827709f7a3810dbd898083f8185ece65eb Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0407 13:38:12.839010 1215662 out.go:177] * Starting "pause-111763" primary control-plane node in "pause-111763" cluster
	I0407 13:38:12.840629 1215662 preload.go:131] Checking if preload exists for k8s version v1.32.2 and runtime crio
	I0407 13:38:12.840698 1215662 preload.go:146] Found local preload: /home/jenkins/minikube-integration/20602-1162386/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.32.2-cri-o-overlay-amd64.tar.lz4
	I0407 13:38:12.840712 1215662 cache.go:56] Caching tarball of preloaded images
	I0407 13:38:12.840821 1215662 preload.go:172] Found /home/jenkins/minikube-integration/20602-1162386/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.32.2-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I0407 13:38:12.840839 1215662 cache.go:59] Finished verifying existence of preloaded tar for v1.32.2 on crio
	I0407 13:38:12.841035 1215662 profile.go:143] Saving config to /home/jenkins/minikube-integration/20602-1162386/.minikube/profiles/pause-111763/config.json ...
	I0407 13:38:12.841289 1215662 start.go:360] acquireMachinesLock for pause-111763: {Name:mk51d4c744fa92d56cf6ac11b1e792c85ef6709a Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0407 13:38:12.841342 1215662 start.go:364] duration metric: took 30.538µs to acquireMachinesLock for "pause-111763"
	I0407 13:38:12.841364 1215662 start.go:96] Skipping create...Using existing machine configuration
	I0407 13:38:12.841373 1215662 fix.go:54] fixHost starting: 
	I0407 13:38:12.841694 1215662 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0407 13:38:12.841751 1215662 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0407 13:38:12.858568 1215662 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46103
	I0407 13:38:12.859080 1215662 main.go:141] libmachine: () Calling .GetVersion
	I0407 13:38:12.859632 1215662 main.go:141] libmachine: Using API Version  1
	I0407 13:38:12.859650 1215662 main.go:141] libmachine: () Calling .SetConfigRaw
	I0407 13:38:12.860076 1215662 main.go:141] libmachine: () Calling .GetMachineName
	I0407 13:38:12.860356 1215662 main.go:141] libmachine: (pause-111763) Calling .DriverName
	I0407 13:38:12.860590 1215662 main.go:141] libmachine: (pause-111763) Calling .GetState
	I0407 13:38:12.862792 1215662 fix.go:112] recreateIfNeeded on pause-111763: state=Running err=<nil>
	W0407 13:38:12.862823 1215662 fix.go:138] unexpected machine state, will restart: <nil>
	I0407 13:38:12.865411 1215662 out.go:177] * Updating the running kvm2 "pause-111763" VM ...
	I0407 13:38:12.867268 1215662 machine.go:93] provisionDockerMachine start ...
	I0407 13:38:12.867310 1215662 main.go:141] libmachine: (pause-111763) Calling .DriverName
	I0407 13:38:12.867709 1215662 main.go:141] libmachine: (pause-111763) Calling .GetSSHHostname
	I0407 13:38:12.872753 1215662 main.go:141] libmachine: (pause-111763) DBG | domain pause-111763 has defined MAC address 52:54:00:6e:6a:43 in network mk-pause-111763
	I0407 13:38:12.873448 1215662 main.go:141] libmachine: (pause-111763) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:6e:6a:43", ip: ""} in network mk-pause-111763: {Iface:virbr2 ExpiryTime:2025-04-07 14:37:26 +0000 UTC Type:0 Mac:52:54:00:6e:6a:43 Iaid: IPaddr:192.168.50.135 Prefix:24 Hostname:pause-111763 Clientid:01:52:54:00:6e:6a:43}
	I0407 13:38:12.873491 1215662 main.go:141] libmachine: (pause-111763) DBG | domain pause-111763 has defined IP address 192.168.50.135 and MAC address 52:54:00:6e:6a:43 in network mk-pause-111763
	I0407 13:38:12.873757 1215662 main.go:141] libmachine: (pause-111763) Calling .GetSSHPort
	I0407 13:38:12.873993 1215662 main.go:141] libmachine: (pause-111763) Calling .GetSSHKeyPath
	I0407 13:38:12.874207 1215662 main.go:141] libmachine: (pause-111763) Calling .GetSSHKeyPath
	I0407 13:38:12.874508 1215662 main.go:141] libmachine: (pause-111763) Calling .GetSSHUsername
	I0407 13:38:12.874871 1215662 main.go:141] libmachine: Using SSH client type: native
	I0407 13:38:12.875261 1215662 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x836360] 0x839060 <nil>  [] 0s} 192.168.50.135 22 <nil> <nil>}
	I0407 13:38:12.875287 1215662 main.go:141] libmachine: About to run SSH command:
	hostname
	I0407 13:38:12.978730 1215662 main.go:141] libmachine: SSH cmd err, output: <nil>: pause-111763
	
	I0407 13:38:12.978768 1215662 main.go:141] libmachine: (pause-111763) Calling .GetMachineName
	I0407 13:38:12.979116 1215662 buildroot.go:166] provisioning hostname "pause-111763"
	I0407 13:38:12.979144 1215662 main.go:141] libmachine: (pause-111763) Calling .GetMachineName
	I0407 13:38:12.979355 1215662 main.go:141] libmachine: (pause-111763) Calling .GetSSHHostname
	I0407 13:38:12.982540 1215662 main.go:141] libmachine: (pause-111763) DBG | domain pause-111763 has defined MAC address 52:54:00:6e:6a:43 in network mk-pause-111763
	I0407 13:38:12.982907 1215662 main.go:141] libmachine: (pause-111763) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:6e:6a:43", ip: ""} in network mk-pause-111763: {Iface:virbr2 ExpiryTime:2025-04-07 14:37:26 +0000 UTC Type:0 Mac:52:54:00:6e:6a:43 Iaid: IPaddr:192.168.50.135 Prefix:24 Hostname:pause-111763 Clientid:01:52:54:00:6e:6a:43}
	I0407 13:38:12.982932 1215662 main.go:141] libmachine: (pause-111763) DBG | domain pause-111763 has defined IP address 192.168.50.135 and MAC address 52:54:00:6e:6a:43 in network mk-pause-111763
	I0407 13:38:12.983051 1215662 main.go:141] libmachine: (pause-111763) Calling .GetSSHPort
	I0407 13:38:12.983270 1215662 main.go:141] libmachine: (pause-111763) Calling .GetSSHKeyPath
	I0407 13:38:12.983499 1215662 main.go:141] libmachine: (pause-111763) Calling .GetSSHKeyPath
	I0407 13:38:12.983639 1215662 main.go:141] libmachine: (pause-111763) Calling .GetSSHUsername
	I0407 13:38:12.983819 1215662 main.go:141] libmachine: Using SSH client type: native
	I0407 13:38:12.984097 1215662 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x836360] 0x839060 <nil>  [] 0s} 192.168.50.135 22 <nil> <nil>}
	I0407 13:38:12.984122 1215662 main.go:141] libmachine: About to run SSH command:
	sudo hostname pause-111763 && echo "pause-111763" | sudo tee /etc/hostname
	I0407 13:38:13.100478 1215662 main.go:141] libmachine: SSH cmd err, output: <nil>: pause-111763
	
	I0407 13:38:13.100506 1215662 main.go:141] libmachine: (pause-111763) Calling .GetSSHHostname
	I0407 13:38:13.103717 1215662 main.go:141] libmachine: (pause-111763) DBG | domain pause-111763 has defined MAC address 52:54:00:6e:6a:43 in network mk-pause-111763
	I0407 13:38:13.104189 1215662 main.go:141] libmachine: (pause-111763) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:6e:6a:43", ip: ""} in network mk-pause-111763: {Iface:virbr2 ExpiryTime:2025-04-07 14:37:26 +0000 UTC Type:0 Mac:52:54:00:6e:6a:43 Iaid: IPaddr:192.168.50.135 Prefix:24 Hostname:pause-111763 Clientid:01:52:54:00:6e:6a:43}
	I0407 13:38:13.104228 1215662 main.go:141] libmachine: (pause-111763) DBG | domain pause-111763 has defined IP address 192.168.50.135 and MAC address 52:54:00:6e:6a:43 in network mk-pause-111763
	I0407 13:38:13.104473 1215662 main.go:141] libmachine: (pause-111763) Calling .GetSSHPort
	I0407 13:38:13.104721 1215662 main.go:141] libmachine: (pause-111763) Calling .GetSSHKeyPath
	I0407 13:38:13.104978 1215662 main.go:141] libmachine: (pause-111763) Calling .GetSSHKeyPath
	I0407 13:38:13.105172 1215662 main.go:141] libmachine: (pause-111763) Calling .GetSSHUsername
	I0407 13:38:13.105441 1215662 main.go:141] libmachine: Using SSH client type: native
	I0407 13:38:13.105661 1215662 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x836360] 0x839060 <nil>  [] 0s} 192.168.50.135 22 <nil> <nil>}
	I0407 13:38:13.105679 1215662 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\spause-111763' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 pause-111763/g' /etc/hosts;
				else 
					echo '127.0.1.1 pause-111763' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0407 13:38:13.215373 1215662 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0407 13:38:13.215409 1215662 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/20602-1162386/.minikube CaCertPath:/home/jenkins/minikube-integration/20602-1162386/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/20602-1162386/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/20602-1162386/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/20602-1162386/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/20602-1162386/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/20602-1162386/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/20602-1162386/.minikube}
	I0407 13:38:13.215440 1215662 buildroot.go:174] setting up certificates
	I0407 13:38:13.215455 1215662 provision.go:84] configureAuth start
	I0407 13:38:13.215466 1215662 main.go:141] libmachine: (pause-111763) Calling .GetMachineName
	I0407 13:38:13.215858 1215662 main.go:141] libmachine: (pause-111763) Calling .GetIP
	I0407 13:38:13.219032 1215662 main.go:141] libmachine: (pause-111763) DBG | domain pause-111763 has defined MAC address 52:54:00:6e:6a:43 in network mk-pause-111763
	I0407 13:38:13.219477 1215662 main.go:141] libmachine: (pause-111763) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:6e:6a:43", ip: ""} in network mk-pause-111763: {Iface:virbr2 ExpiryTime:2025-04-07 14:37:26 +0000 UTC Type:0 Mac:52:54:00:6e:6a:43 Iaid: IPaddr:192.168.50.135 Prefix:24 Hostname:pause-111763 Clientid:01:52:54:00:6e:6a:43}
	I0407 13:38:13.219510 1215662 main.go:141] libmachine: (pause-111763) DBG | domain pause-111763 has defined IP address 192.168.50.135 and MAC address 52:54:00:6e:6a:43 in network mk-pause-111763
	I0407 13:38:13.219671 1215662 main.go:141] libmachine: (pause-111763) Calling .GetSSHHostname
	I0407 13:38:13.222926 1215662 main.go:141] libmachine: (pause-111763) DBG | domain pause-111763 has defined MAC address 52:54:00:6e:6a:43 in network mk-pause-111763
	I0407 13:38:13.223484 1215662 main.go:141] libmachine: (pause-111763) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:6e:6a:43", ip: ""} in network mk-pause-111763: {Iface:virbr2 ExpiryTime:2025-04-07 14:37:26 +0000 UTC Type:0 Mac:52:54:00:6e:6a:43 Iaid: IPaddr:192.168.50.135 Prefix:24 Hostname:pause-111763 Clientid:01:52:54:00:6e:6a:43}
	I0407 13:38:13.223519 1215662 main.go:141] libmachine: (pause-111763) DBG | domain pause-111763 has defined IP address 192.168.50.135 and MAC address 52:54:00:6e:6a:43 in network mk-pause-111763
	I0407 13:38:13.223722 1215662 provision.go:143] copyHostCerts
	I0407 13:38:13.223788 1215662 exec_runner.go:144] found /home/jenkins/minikube-integration/20602-1162386/.minikube/cert.pem, removing ...
	I0407 13:38:13.223809 1215662 exec_runner.go:203] rm: /home/jenkins/minikube-integration/20602-1162386/.minikube/cert.pem
	I0407 13:38:13.223882 1215662 exec_runner.go:151] cp: /home/jenkins/minikube-integration/20602-1162386/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/20602-1162386/.minikube/cert.pem (1123 bytes)
	I0407 13:38:13.223977 1215662 exec_runner.go:144] found /home/jenkins/minikube-integration/20602-1162386/.minikube/key.pem, removing ...
	I0407 13:38:13.223985 1215662 exec_runner.go:203] rm: /home/jenkins/minikube-integration/20602-1162386/.minikube/key.pem
	I0407 13:38:13.224012 1215662 exec_runner.go:151] cp: /home/jenkins/minikube-integration/20602-1162386/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/20602-1162386/.minikube/key.pem (1675 bytes)
	I0407 13:38:13.224069 1215662 exec_runner.go:144] found /home/jenkins/minikube-integration/20602-1162386/.minikube/ca.pem, removing ...
	I0407 13:38:13.224077 1215662 exec_runner.go:203] rm: /home/jenkins/minikube-integration/20602-1162386/.minikube/ca.pem
	I0407 13:38:13.224098 1215662 exec_runner.go:151] cp: /home/jenkins/minikube-integration/20602-1162386/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/20602-1162386/.minikube/ca.pem (1078 bytes)
	I0407 13:38:13.224146 1215662 provision.go:117] generating server cert: /home/jenkins/minikube-integration/20602-1162386/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/20602-1162386/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/20602-1162386/.minikube/certs/ca-key.pem org=jenkins.pause-111763 san=[127.0.0.1 192.168.50.135 localhost minikube pause-111763]
	I0407 13:38:13.783449 1215662 provision.go:177] copyRemoteCerts
	I0407 13:38:13.783532 1215662 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0407 13:38:13.783560 1215662 main.go:141] libmachine: (pause-111763) Calling .GetSSHHostname
	I0407 13:38:13.786663 1215662 main.go:141] libmachine: (pause-111763) DBG | domain pause-111763 has defined MAC address 52:54:00:6e:6a:43 in network mk-pause-111763
	I0407 13:38:13.786980 1215662 main.go:141] libmachine: (pause-111763) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:6e:6a:43", ip: ""} in network mk-pause-111763: {Iface:virbr2 ExpiryTime:2025-04-07 14:37:26 +0000 UTC Type:0 Mac:52:54:00:6e:6a:43 Iaid: IPaddr:192.168.50.135 Prefix:24 Hostname:pause-111763 Clientid:01:52:54:00:6e:6a:43}
	I0407 13:38:13.787006 1215662 main.go:141] libmachine: (pause-111763) DBG | domain pause-111763 has defined IP address 192.168.50.135 and MAC address 52:54:00:6e:6a:43 in network mk-pause-111763
	I0407 13:38:13.787249 1215662 main.go:141] libmachine: (pause-111763) Calling .GetSSHPort
	I0407 13:38:13.787577 1215662 main.go:141] libmachine: (pause-111763) Calling .GetSSHKeyPath
	I0407 13:38:13.787757 1215662 main.go:141] libmachine: (pause-111763) Calling .GetSSHUsername
	I0407 13:38:13.787936 1215662 sshutil.go:53] new ssh client: &{IP:192.168.50.135 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20602-1162386/.minikube/machines/pause-111763/id_rsa Username:docker}
	I0407 13:38:13.871639 1215662 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20602-1162386/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0407 13:38:13.901982 1215662 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20602-1162386/.minikube/machines/server.pem --> /etc/docker/server.pem (1208 bytes)
	I0407 13:38:13.936509 1215662 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20602-1162386/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0407 13:38:13.968492 1215662 provision.go:87] duration metric: took 753.020928ms to configureAuth
	I0407 13:38:13.968526 1215662 buildroot.go:189] setting minikube options for container-runtime
	I0407 13:38:13.968780 1215662 config.go:182] Loaded profile config "pause-111763": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.32.2
	I0407 13:38:13.968864 1215662 main.go:141] libmachine: (pause-111763) Calling .GetSSHHostname
	I0407 13:38:13.971825 1215662 main.go:141] libmachine: (pause-111763) DBG | domain pause-111763 has defined MAC address 52:54:00:6e:6a:43 in network mk-pause-111763
	I0407 13:38:13.972234 1215662 main.go:141] libmachine: (pause-111763) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:6e:6a:43", ip: ""} in network mk-pause-111763: {Iface:virbr2 ExpiryTime:2025-04-07 14:37:26 +0000 UTC Type:0 Mac:52:54:00:6e:6a:43 Iaid: IPaddr:192.168.50.135 Prefix:24 Hostname:pause-111763 Clientid:01:52:54:00:6e:6a:43}
	I0407 13:38:13.972272 1215662 main.go:141] libmachine: (pause-111763) DBG | domain pause-111763 has defined IP address 192.168.50.135 and MAC address 52:54:00:6e:6a:43 in network mk-pause-111763
	I0407 13:38:13.972509 1215662 main.go:141] libmachine: (pause-111763) Calling .GetSSHPort
	I0407 13:38:13.972789 1215662 main.go:141] libmachine: (pause-111763) Calling .GetSSHKeyPath
	I0407 13:38:13.973007 1215662 main.go:141] libmachine: (pause-111763) Calling .GetSSHKeyPath
	I0407 13:38:13.973226 1215662 main.go:141] libmachine: (pause-111763) Calling .GetSSHUsername
	I0407 13:38:13.973414 1215662 main.go:141] libmachine: Using SSH client type: native
	I0407 13:38:13.973740 1215662 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x836360] 0x839060 <nil>  [] 0s} 192.168.50.135 22 <nil> <nil>}
	I0407 13:38:13.973763 1215662 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0407 13:38:19.577044 1215662 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0407 13:38:19.577077 1215662 machine.go:96] duration metric: took 6.709784998s to provisionDockerMachine
	I0407 13:38:19.577090 1215662 start.go:293] postStartSetup for "pause-111763" (driver="kvm2")
	I0407 13:38:19.577107 1215662 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0407 13:38:19.577130 1215662 main.go:141] libmachine: (pause-111763) Calling .DriverName
	I0407 13:38:19.577633 1215662 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0407 13:38:19.577670 1215662 main.go:141] libmachine: (pause-111763) Calling .GetSSHHostname
	I0407 13:38:19.581841 1215662 main.go:141] libmachine: (pause-111763) DBG | domain pause-111763 has defined MAC address 52:54:00:6e:6a:43 in network mk-pause-111763
	I0407 13:38:19.582319 1215662 main.go:141] libmachine: (pause-111763) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:6e:6a:43", ip: ""} in network mk-pause-111763: {Iface:virbr2 ExpiryTime:2025-04-07 14:37:26 +0000 UTC Type:0 Mac:52:54:00:6e:6a:43 Iaid: IPaddr:192.168.50.135 Prefix:24 Hostname:pause-111763 Clientid:01:52:54:00:6e:6a:43}
	I0407 13:38:19.582356 1215662 main.go:141] libmachine: (pause-111763) DBG | domain pause-111763 has defined IP address 192.168.50.135 and MAC address 52:54:00:6e:6a:43 in network mk-pause-111763
	I0407 13:38:19.582593 1215662 main.go:141] libmachine: (pause-111763) Calling .GetSSHPort
	I0407 13:38:19.582936 1215662 main.go:141] libmachine: (pause-111763) Calling .GetSSHKeyPath
	I0407 13:38:19.583194 1215662 main.go:141] libmachine: (pause-111763) Calling .GetSSHUsername
	I0407 13:38:19.583393 1215662 sshutil.go:53] new ssh client: &{IP:192.168.50.135 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20602-1162386/.minikube/machines/pause-111763/id_rsa Username:docker}
	I0407 13:38:19.664859 1215662 ssh_runner.go:195] Run: cat /etc/os-release
	I0407 13:38:19.669619 1215662 info.go:137] Remote host: Buildroot 2023.02.9
	I0407 13:38:19.669656 1215662 filesync.go:126] Scanning /home/jenkins/minikube-integration/20602-1162386/.minikube/addons for local assets ...
	I0407 13:38:19.669758 1215662 filesync.go:126] Scanning /home/jenkins/minikube-integration/20602-1162386/.minikube/files for local assets ...
	I0407 13:38:19.669859 1215662 filesync.go:149] local asset: /home/jenkins/minikube-integration/20602-1162386/.minikube/files/etc/ssl/certs/11697162.pem -> 11697162.pem in /etc/ssl/certs
	I0407 13:38:19.669998 1215662 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0407 13:38:19.680625 1215662 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20602-1162386/.minikube/files/etc/ssl/certs/11697162.pem --> /etc/ssl/certs/11697162.pem (1708 bytes)
	I0407 13:38:19.709269 1215662 start.go:296] duration metric: took 132.160546ms for postStartSetup
	I0407 13:38:19.709314 1215662 fix.go:56] duration metric: took 6.867940004s for fixHost
	I0407 13:38:19.709343 1215662 main.go:141] libmachine: (pause-111763) Calling .GetSSHHostname
	I0407 13:38:19.713032 1215662 main.go:141] libmachine: (pause-111763) DBG | domain pause-111763 has defined MAC address 52:54:00:6e:6a:43 in network mk-pause-111763
	I0407 13:38:19.713533 1215662 main.go:141] libmachine: (pause-111763) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:6e:6a:43", ip: ""} in network mk-pause-111763: {Iface:virbr2 ExpiryTime:2025-04-07 14:37:26 +0000 UTC Type:0 Mac:52:54:00:6e:6a:43 Iaid: IPaddr:192.168.50.135 Prefix:24 Hostname:pause-111763 Clientid:01:52:54:00:6e:6a:43}
	I0407 13:38:19.713569 1215662 main.go:141] libmachine: (pause-111763) DBG | domain pause-111763 has defined IP address 192.168.50.135 and MAC address 52:54:00:6e:6a:43 in network mk-pause-111763
	I0407 13:38:19.713767 1215662 main.go:141] libmachine: (pause-111763) Calling .GetSSHPort
	I0407 13:38:19.714053 1215662 main.go:141] libmachine: (pause-111763) Calling .GetSSHKeyPath
	I0407 13:38:19.714338 1215662 main.go:141] libmachine: (pause-111763) Calling .GetSSHKeyPath
	I0407 13:38:19.714589 1215662 main.go:141] libmachine: (pause-111763) Calling .GetSSHUsername
	I0407 13:38:19.714832 1215662 main.go:141] libmachine: Using SSH client type: native
	I0407 13:38:19.715051 1215662 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x836360] 0x839060 <nil>  [] 0s} 192.168.50.135 22 <nil> <nil>}
	I0407 13:38:19.715064 1215662 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0407 13:38:19.823753 1215662 main.go:141] libmachine: SSH cmd err, output: <nil>: 1744033099.814927524
	
	I0407 13:38:19.823786 1215662 fix.go:216] guest clock: 1744033099.814927524
	I0407 13:38:19.823798 1215662 fix.go:229] Guest: 2025-04-07 13:38:19.814927524 +0000 UTC Remote: 2025-04-07 13:38:19.709319613 +0000 UTC m=+7.045075644 (delta=105.607911ms)
	I0407 13:38:19.823828 1215662 fix.go:200] guest clock delta is within tolerance: 105.607911ms
	I0407 13:38:19.823835 1215662 start.go:83] releasing machines lock for "pause-111763", held for 6.982480025s
	I0407 13:38:19.823860 1215662 main.go:141] libmachine: (pause-111763) Calling .DriverName
	I0407 13:38:19.824214 1215662 main.go:141] libmachine: (pause-111763) Calling .GetIP
	I0407 13:38:19.828076 1215662 main.go:141] libmachine: (pause-111763) DBG | domain pause-111763 has defined MAC address 52:54:00:6e:6a:43 in network mk-pause-111763
	I0407 13:38:19.828644 1215662 main.go:141] libmachine: (pause-111763) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:6e:6a:43", ip: ""} in network mk-pause-111763: {Iface:virbr2 ExpiryTime:2025-04-07 14:37:26 +0000 UTC Type:0 Mac:52:54:00:6e:6a:43 Iaid: IPaddr:192.168.50.135 Prefix:24 Hostname:pause-111763 Clientid:01:52:54:00:6e:6a:43}
	I0407 13:38:19.828700 1215662 main.go:141] libmachine: (pause-111763) DBG | domain pause-111763 has defined IP address 192.168.50.135 and MAC address 52:54:00:6e:6a:43 in network mk-pause-111763
	I0407 13:38:19.829126 1215662 main.go:141] libmachine: (pause-111763) Calling .DriverName
	I0407 13:38:19.829968 1215662 main.go:141] libmachine: (pause-111763) Calling .DriverName
	I0407 13:38:19.830223 1215662 main.go:141] libmachine: (pause-111763) Calling .DriverName
	I0407 13:38:19.830339 1215662 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0407 13:38:19.830401 1215662 main.go:141] libmachine: (pause-111763) Calling .GetSSHHostname
	I0407 13:38:19.830533 1215662 ssh_runner.go:195] Run: cat /version.json
	I0407 13:38:19.830557 1215662 main.go:141] libmachine: (pause-111763) Calling .GetSSHHostname
	I0407 13:38:19.834202 1215662 main.go:141] libmachine: (pause-111763) DBG | domain pause-111763 has defined MAC address 52:54:00:6e:6a:43 in network mk-pause-111763
	I0407 13:38:19.834241 1215662 main.go:141] libmachine: (pause-111763) DBG | domain pause-111763 has defined MAC address 52:54:00:6e:6a:43 in network mk-pause-111763
	I0407 13:38:19.834706 1215662 main.go:141] libmachine: (pause-111763) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:6e:6a:43", ip: ""} in network mk-pause-111763: {Iface:virbr2 ExpiryTime:2025-04-07 14:37:26 +0000 UTC Type:0 Mac:52:54:00:6e:6a:43 Iaid: IPaddr:192.168.50.135 Prefix:24 Hostname:pause-111763 Clientid:01:52:54:00:6e:6a:43}
	I0407 13:38:19.834742 1215662 main.go:141] libmachine: (pause-111763) DBG | domain pause-111763 has defined IP address 192.168.50.135 and MAC address 52:54:00:6e:6a:43 in network mk-pause-111763
	I0407 13:38:19.834769 1215662 main.go:141] libmachine: (pause-111763) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:6e:6a:43", ip: ""} in network mk-pause-111763: {Iface:virbr2 ExpiryTime:2025-04-07 14:37:26 +0000 UTC Type:0 Mac:52:54:00:6e:6a:43 Iaid: IPaddr:192.168.50.135 Prefix:24 Hostname:pause-111763 Clientid:01:52:54:00:6e:6a:43}
	I0407 13:38:19.834784 1215662 main.go:141] libmachine: (pause-111763) DBG | domain pause-111763 has defined IP address 192.168.50.135 and MAC address 52:54:00:6e:6a:43 in network mk-pause-111763
	I0407 13:38:19.835040 1215662 main.go:141] libmachine: (pause-111763) Calling .GetSSHPort
	I0407 13:38:19.835165 1215662 main.go:141] libmachine: (pause-111763) Calling .GetSSHPort
	I0407 13:38:19.835305 1215662 main.go:141] libmachine: (pause-111763) Calling .GetSSHKeyPath
	I0407 13:38:19.835398 1215662 main.go:141] libmachine: (pause-111763) Calling .GetSSHKeyPath
	I0407 13:38:19.835484 1215662 main.go:141] libmachine: (pause-111763) Calling .GetSSHUsername
	I0407 13:38:19.835547 1215662 main.go:141] libmachine: (pause-111763) Calling .GetSSHUsername
	I0407 13:38:19.835630 1215662 sshutil.go:53] new ssh client: &{IP:192.168.50.135 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20602-1162386/.minikube/machines/pause-111763/id_rsa Username:docker}
	I0407 13:38:19.836008 1215662 sshutil.go:53] new ssh client: &{IP:192.168.50.135 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20602-1162386/.minikube/machines/pause-111763/id_rsa Username:docker}
	I0407 13:38:19.940480 1215662 ssh_runner.go:195] Run: systemctl --version
	I0407 13:38:19.947496 1215662 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0407 13:38:20.107630 1215662 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0407 13:38:20.115827 1215662 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0407 13:38:20.115931 1215662 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0407 13:38:20.127442 1215662 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I0407 13:38:20.127504 1215662 start.go:495] detecting cgroup driver to use...
	I0407 13:38:20.127588 1215662 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0407 13:38:20.150765 1215662 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0407 13:38:20.168683 1215662 docker.go:217] disabling cri-docker service (if available) ...
	I0407 13:38:20.168784 1215662 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0407 13:38:20.186228 1215662 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0407 13:38:20.205015 1215662 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0407 13:38:20.365027 1215662 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0407 13:38:20.553245 1215662 docker.go:233] disabling docker service ...
	I0407 13:38:20.553328 1215662 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0407 13:38:20.576783 1215662 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0407 13:38:20.594397 1215662 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0407 13:38:20.740611 1215662 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0407 13:38:20.877899 1215662 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0407 13:38:20.894874 1215662 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0407 13:38:20.916346 1215662 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10" pause image...
	I0407 13:38:20.916424 1215662 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0407 13:38:20.931160 1215662 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0407 13:38:20.931241 1215662 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0407 13:38:20.944392 1215662 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0407 13:38:20.958586 1215662 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0407 13:38:20.971927 1215662 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0407 13:38:20.983810 1215662 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0407 13:38:20.997486 1215662 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I0407 13:38:21.011799 1215662 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I0407 13:38:21.025083 1215662 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0407 13:38:21.037026 1215662 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0407 13:38:21.049000 1215662 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0407 13:38:21.196504 1215662 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0407 13:38:21.458511 1215662 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I0407 13:38:21.458603 1215662 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0407 13:38:21.463966 1215662 start.go:563] Will wait 60s for crictl version
	I0407 13:38:21.464048 1215662 ssh_runner.go:195] Run: which crictl
	I0407 13:38:21.468284 1215662 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0407 13:38:21.509041 1215662 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0407 13:38:21.509181 1215662 ssh_runner.go:195] Run: crio --version
	I0407 13:38:21.544229 1215662 ssh_runner.go:195] Run: crio --version
	I0407 13:38:21.580336 1215662 out.go:177] * Preparing Kubernetes v1.32.2 on CRI-O 1.29.1 ...
	I0407 13:38:21.581865 1215662 main.go:141] libmachine: (pause-111763) Calling .GetIP
	I0407 13:38:21.585667 1215662 main.go:141] libmachine: (pause-111763) DBG | domain pause-111763 has defined MAC address 52:54:00:6e:6a:43 in network mk-pause-111763
	I0407 13:38:21.586228 1215662 main.go:141] libmachine: (pause-111763) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:6e:6a:43", ip: ""} in network mk-pause-111763: {Iface:virbr2 ExpiryTime:2025-04-07 14:37:26 +0000 UTC Type:0 Mac:52:54:00:6e:6a:43 Iaid: IPaddr:192.168.50.135 Prefix:24 Hostname:pause-111763 Clientid:01:52:54:00:6e:6a:43}
	I0407 13:38:21.586263 1215662 main.go:141] libmachine: (pause-111763) DBG | domain pause-111763 has defined IP address 192.168.50.135 and MAC address 52:54:00:6e:6a:43 in network mk-pause-111763
	I0407 13:38:21.586557 1215662 ssh_runner.go:195] Run: grep 192.168.50.1	host.minikube.internal$ /etc/hosts
	I0407 13:38:21.592056 1215662 kubeadm.go:883] updating cluster {Name:pause-111763 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.35.0-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.46-1743675393-20591@sha256:8de8167e280414c9d16e4c3da59bc85bc7c9cc24228af995863fc7bcabfcf727 Memory:2048 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.32.2 ClusterName:pause-111763 Name
space:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.50.135 Port:8443 KubernetesVersion:v1.32.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-secu
rity-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0407 13:38:21.592237 1215662 preload.go:131] Checking if preload exists for k8s version v1.32.2 and runtime crio
	I0407 13:38:21.592290 1215662 ssh_runner.go:195] Run: sudo crictl images --output json
	I0407 13:38:21.643329 1215662 crio.go:514] all images are preloaded for cri-o runtime.
	I0407 13:38:21.643354 1215662 crio.go:433] Images already preloaded, skipping extraction
	I0407 13:38:21.643412 1215662 ssh_runner.go:195] Run: sudo crictl images --output json
	I0407 13:38:21.681481 1215662 crio.go:514] all images are preloaded for cri-o runtime.
	I0407 13:38:21.681519 1215662 cache_images.go:84] Images are preloaded, skipping loading
	I0407 13:38:21.681531 1215662 kubeadm.go:934] updating node { 192.168.50.135 8443 v1.32.2 crio true true} ...
	I0407 13:38:21.681693 1215662 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.32.2/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=pause-111763 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.50.135
	
	[Install]
	 config:
	{KubernetesVersion:v1.32.2 ClusterName:pause-111763 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0407 13:38:21.681812 1215662 ssh_runner.go:195] Run: crio config
	I0407 13:38:21.769215 1215662 cni.go:84] Creating CNI manager for ""
	I0407 13:38:21.769286 1215662 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0407 13:38:21.769304 1215662 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0407 13:38:21.769335 1215662 kubeadm.go:189] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.50.135 APIServerPort:8443 KubernetesVersion:v1.32.2 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:pause-111763 NodeName:pause-111763 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.50.135"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.50.135 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kub
ernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0407 13:38:21.769515 1215662 kubeadm.go:195] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.50.135
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "pause-111763"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.50.135"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.50.135"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      - name: "proxy-refresh-interval"
	        value: "70000"
	kubernetesVersion: v1.32.2
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0407 13:38:21.769674 1215662 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.32.2
	I0407 13:38:21.801098 1215662 binaries.go:44] Found k8s binaries, skipping transfer
	I0407 13:38:21.801203 1215662 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0407 13:38:21.816962 1215662 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (312 bytes)
	I0407 13:38:21.837283 1215662 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0407 13:38:21.944712 1215662 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2292 bytes)
	I0407 13:38:22.025517 1215662 ssh_runner.go:195] Run: grep 192.168.50.135	control-plane.minikube.internal$ /etc/hosts
	I0407 13:38:22.032832 1215662 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0407 13:38:22.344255 1215662 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0407 13:38:22.422193 1215662 certs.go:68] Setting up /home/jenkins/minikube-integration/20602-1162386/.minikube/profiles/pause-111763 for IP: 192.168.50.135
	I0407 13:38:22.422223 1215662 certs.go:194] generating shared ca certs ...
	I0407 13:38:22.422258 1215662 certs.go:226] acquiring lock for ca certs: {Name:mk8e89191fca7f2111bdd08c345368f593b0d5d0 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0407 13:38:22.422458 1215662 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/20602-1162386/.minikube/ca.key
	I0407 13:38:22.422532 1215662 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/20602-1162386/.minikube/proxy-client-ca.key
	I0407 13:38:22.422577 1215662 certs.go:256] generating profile certs ...
	I0407 13:38:22.422706 1215662 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/20602-1162386/.minikube/profiles/pause-111763/client.key
	I0407 13:38:22.422794 1215662 certs.go:359] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/20602-1162386/.minikube/profiles/pause-111763/apiserver.key.12705a14
	I0407 13:38:22.422855 1215662 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/20602-1162386/.minikube/profiles/pause-111763/proxy-client.key
	I0407 13:38:22.423026 1215662 certs.go:484] found cert: /home/jenkins/minikube-integration/20602-1162386/.minikube/certs/1169716.pem (1338 bytes)
	W0407 13:38:22.423071 1215662 certs.go:480] ignoring /home/jenkins/minikube-integration/20602-1162386/.minikube/certs/1169716_empty.pem, impossibly tiny 0 bytes
	I0407 13:38:22.423081 1215662 certs.go:484] found cert: /home/jenkins/minikube-integration/20602-1162386/.minikube/certs/ca-key.pem (1675 bytes)
	I0407 13:38:22.423120 1215662 certs.go:484] found cert: /home/jenkins/minikube-integration/20602-1162386/.minikube/certs/ca.pem (1078 bytes)
	I0407 13:38:22.423151 1215662 certs.go:484] found cert: /home/jenkins/minikube-integration/20602-1162386/.minikube/certs/cert.pem (1123 bytes)
	I0407 13:38:22.423181 1215662 certs.go:484] found cert: /home/jenkins/minikube-integration/20602-1162386/.minikube/certs/key.pem (1675 bytes)
	I0407 13:38:22.423273 1215662 certs.go:484] found cert: /home/jenkins/minikube-integration/20602-1162386/.minikube/files/etc/ssl/certs/11697162.pem (1708 bytes)
	I0407 13:38:22.424104 1215662 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20602-1162386/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0407 13:38:22.548050 1215662 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20602-1162386/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0407 13:38:22.633918 1215662 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20602-1162386/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0407 13:38:22.746116 1215662 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20602-1162386/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0407 13:38:22.848444 1215662 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20602-1162386/.minikube/profiles/pause-111763/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1419 bytes)
	I0407 13:38:22.964506 1215662 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20602-1162386/.minikube/profiles/pause-111763/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0407 13:38:23.029616 1215662 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20602-1162386/.minikube/profiles/pause-111763/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0407 13:38:23.070863 1215662 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20602-1162386/.minikube/profiles/pause-111763/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0407 13:38:23.120929 1215662 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20602-1162386/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0407 13:38:23.166804 1215662 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20602-1162386/.minikube/certs/1169716.pem --> /usr/share/ca-certificates/1169716.pem (1338 bytes)
	I0407 13:38:23.217442 1215662 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20602-1162386/.minikube/files/etc/ssl/certs/11697162.pem --> /usr/share/ca-certificates/11697162.pem (1708 bytes)
	I0407 13:38:23.266209 1215662 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0407 13:38:23.303454 1215662 ssh_runner.go:195] Run: openssl version
	I0407 13:38:23.322699 1215662 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0407 13:38:23.348173 1215662 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0407 13:38:23.357410 1215662 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Apr  7 12:14 /usr/share/ca-certificates/minikubeCA.pem
	I0407 13:38:23.357495 1215662 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0407 13:38:23.378370 1215662 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0407 13:38:23.398674 1215662 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/1169716.pem && ln -fs /usr/share/ca-certificates/1169716.pem /etc/ssl/certs/1169716.pem"
	I0407 13:38:23.425964 1215662 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/1169716.pem
	I0407 13:38:23.439064 1215662 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Apr  7 12:22 /usr/share/ca-certificates/1169716.pem
	I0407 13:38:23.439154 1215662 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/1169716.pem
	I0407 13:38:23.470410 1215662 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/1169716.pem /etc/ssl/certs/51391683.0"
	I0407 13:38:23.496142 1215662 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/11697162.pem && ln -fs /usr/share/ca-certificates/11697162.pem /etc/ssl/certs/11697162.pem"
	I0407 13:38:23.520565 1215662 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/11697162.pem
	I0407 13:38:23.532477 1215662 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Apr  7 12:22 /usr/share/ca-certificates/11697162.pem
	I0407 13:38:23.532559 1215662 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/11697162.pem
	I0407 13:38:23.542502 1215662 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/11697162.pem /etc/ssl/certs/3ec20f2e.0"
	I0407 13:38:23.567739 1215662 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0407 13:38:23.575186 1215662 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0407 13:38:23.589372 1215662 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0407 13:38:23.599661 1215662 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0407 13:38:23.609524 1215662 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0407 13:38:23.619994 1215662 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0407 13:38:23.632393 1215662 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
	I0407 13:38:23.639390 1215662 kubeadm.go:392] StartCluster: {Name:pause-111763 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.35.0-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.46-1743675393-20591@sha256:8de8167e280414c9d16e4c3da59bc85bc7c9cc24228af995863fc7bcabfcf727 Memory:2048 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.32.2 ClusterName:pause-111763 Namespa
ce:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.50.135 Port:8443 KubernetesVersion:v1.32.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-securit
y-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0407 13:38:23.639552 1215662 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0407 13:38:23.639620 1215662 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0407 13:38:23.726396 1215662 cri.go:89] found id: "5810b858b178854318d9f75335b6fbfe66d2877cd1c9a3b05feb45efc1f269bf"
	I0407 13:38:23.726432 1215662 cri.go:89] found id: "c07cdd2b6b317cb6a72baa68228fcebdf5c9ea28202d01df5b8f10d30cc41cc7"
	I0407 13:38:23.726438 1215662 cri.go:89] found id: "2fe876f74127fa8f6bde0d8368f91d5569eedbef6f1a21083f63b9b96193a6f5"
	I0407 13:38:23.726443 1215662 cri.go:89] found id: "7eed44592009903caf00bff1724f6970ed877b199d1d3d2f6be2073d585ba6bc"
	I0407 13:38:23.726447 1215662 cri.go:89] found id: "6f44aca5b3b4338727c1cd156ccabe024868350e7ba51a714a9fd79b952a60b4"
	I0407 13:38:23.726452 1215662 cri.go:89] found id: "bb00dc1832f25b934ca5d16864198cb029f701ab499845a4a2a2a29cd431b4e3"
	I0407 13:38:23.726456 1215662 cri.go:89] found id: "67fa1096fcd3ea1582575a7ad88facc5d40891b4ce4bdfc09ca8640a410969e3"
	I0407 13:38:23.726459 1215662 cri.go:89] found id: "e1252cc8e7eec024319de723d3f8a2557c52be660e46c095f9ea76af09c96331"
	I0407 13:38:23.726463 1215662 cri.go:89] found id: "180d98c8602766e97df035ab78eb3a4d3424f59553aee8a5d192706cd186af28"
	I0407 13:38:23.726471 1215662 cri.go:89] found id: "b9afe851415fcb11215dbfcb715d5b3820bac9d1a65806a07f17ff5785b040f1"
	I0407 13:38:23.726475 1215662 cri.go:89] found id: "b2a1ba832cc573354c0a61f3a3bc63b52e50468adf55fd01288a78d9e6b3f04c"
	I0407 13:38:23.726478 1215662 cri.go:89] found id: "3f15fd98836e3a799770112d2e9b005ea681584c46ad08db234624b9233805b7"
	I0407 13:38:23.726483 1215662 cri.go:89] found id: ""
	I0407 13:38:23.726547 1215662 ssh_runner.go:195] Run: sudo runc list -f json

                                                
                                                
** /stderr **
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p pause-111763 -n pause-111763
helpers_test.go:244: <<< TestPause/serial/SecondStartNoReconfiguration FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestPause/serial/SecondStartNoReconfiguration]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p pause-111763 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-amd64 -p pause-111763 logs -n 25: (1.798213657s)
helpers_test.go:252: TestPause/serial/SecondStartNoReconfiguration logs: 
-- stdout --
	
	==> Audit <==
	|---------|--------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
	| Command |                          Args                          |           Profile            |  User   | Version |     Start Time      |      End Time       |
	|---------|--------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
	| delete  | -p NoKubernetes-027070                                 | NoKubernetes-027070          | jenkins | v1.35.0 | 07 Apr 25 13:32 UTC | 07 Apr 25 13:32 UTC |
	| start   | -p cert-options-919040                                 | cert-options-919040          | jenkins | v1.35.0 | 07 Apr 25 13:32 UTC | 07 Apr 25 13:33 UTC |
	|         | --memory=2048                                          |                              |         |         |                     |                     |
	|         | --apiserver-ips=127.0.0.1                              |                              |         |         |                     |                     |
	|         | --apiserver-ips=192.168.15.15                          |                              |         |         |                     |                     |
	|         | --apiserver-names=localhost                            |                              |         |         |                     |                     |
	|         | --apiserver-names=www.google.com                       |                              |         |         |                     |                     |
	|         | --apiserver-port=8555                                  |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	| delete  | -p running-upgrade-046238                              | running-upgrade-046238       | jenkins | v1.35.0 | 07 Apr 25 13:32 UTC | 07 Apr 25 13:32 UTC |
	| start   | -p no-preload-028452                                   | no-preload-028452            | jenkins | v1.35.0 | 07 Apr 25 13:32 UTC | 07 Apr 25 13:34 UTC |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --preload=false --driver=kvm2                          |                              |         |         |                     |                     |
	|         |  --container-runtime=crio                              |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.32.2                           |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p old-k8s-version-435730        | old-k8s-version-435730       | jenkins | v1.35.0 | 07 Apr 25 13:33 UTC |                     |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| ssh     | cert-options-919040 ssh                                | cert-options-919040          | jenkins | v1.35.0 | 07 Apr 25 13:33 UTC | 07 Apr 25 13:33 UTC |
	|         | openssl x509 -text -noout -in                          |                              |         |         |                     |                     |
	|         | /var/lib/minikube/certs/apiserver.crt                  |                              |         |         |                     |                     |
	| ssh     | -p cert-options-919040 -- sudo                         | cert-options-919040          | jenkins | v1.35.0 | 07 Apr 25 13:33 UTC | 07 Apr 25 13:33 UTC |
	|         | cat /etc/kubernetes/admin.conf                         |                              |         |         |                     |                     |
	| delete  | -p cert-options-919040                                 | cert-options-919040          | jenkins | v1.35.0 | 07 Apr 25 13:33 UTC | 07 Apr 25 13:33 UTC |
	| start   | -p embed-certs-931633                                  | embed-certs-931633           | jenkins | v1.35.0 | 07 Apr 25 13:33 UTC | 07 Apr 25 13:34 UTC |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --embed-certs --driver=kvm2                            |                              |         |         |                     |                     |
	|         |  --container-runtime=crio                              |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.32.2                           |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p no-preload-028452             | no-preload-028452            | jenkins | v1.35.0 | 07 Apr 25 13:34 UTC | 07 Apr 25 13:34 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| stop    | -p no-preload-028452                                   | no-preload-028452            | jenkins | v1.35.0 | 07 Apr 25 13:34 UTC | 07 Apr 25 13:36 UTC |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| stop    | -p old-k8s-version-435730                              | old-k8s-version-435730       | jenkins | v1.35.0 | 07 Apr 25 13:34 UTC | 07 Apr 25 13:35 UTC |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable dashboard -p old-k8s-version-435730             | old-k8s-version-435730       | jenkins | v1.35.0 | 07 Apr 25 13:35 UTC | 07 Apr 25 13:35 UTC |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p old-k8s-version-435730                              | old-k8s-version-435730       | jenkins | v1.35.0 | 07 Apr 25 13:35 UTC |                     |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --kvm-network=default                                  |                              |         |         |                     |                     |
	|         | --kvm-qemu-uri=qemu:///system                          |                              |         |         |                     |                     |
	|         | --disable-driver-mounts                                |                              |         |         |                     |                     |
	|         | --keep-context=false                                   |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.20.0                           |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p embed-certs-931633            | embed-certs-931633           | jenkins | v1.35.0 | 07 Apr 25 13:35 UTC | 07 Apr 25 13:35 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| stop    | -p embed-certs-931633                                  | embed-certs-931633           | jenkins | v1.35.0 | 07 Apr 25 13:35 UTC | 07 Apr 25 13:36 UTC |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| start   | -p cert-expiration-950320                              | cert-expiration-950320       | jenkins | v1.35.0 | 07 Apr 25 13:36 UTC | 07 Apr 25 13:36 UTC |
	|         | --memory=2048                                          |                              |         |         |                     |                     |
	|         | --cert-expiration=8760h                                |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	| addons  | enable dashboard -p no-preload-028452                  | no-preload-028452            | jenkins | v1.35.0 | 07 Apr 25 13:36 UTC | 07 Apr 25 13:36 UTC |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p no-preload-028452                                   | no-preload-028452            | jenkins | v1.35.0 | 07 Apr 25 13:36 UTC |                     |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --preload=false --driver=kvm2                          |                              |         |         |                     |                     |
	|         |  --container-runtime=crio                              |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.32.2                           |                              |         |         |                     |                     |
	| addons  | enable dashboard -p embed-certs-931633                 | embed-certs-931633           | jenkins | v1.35.0 | 07 Apr 25 13:36 UTC | 07 Apr 25 13:36 UTC |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p embed-certs-931633                                  | embed-certs-931633           | jenkins | v1.35.0 | 07 Apr 25 13:36 UTC |                     |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --embed-certs --driver=kvm2                            |                              |         |         |                     |                     |
	|         |  --container-runtime=crio                              |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.32.2                           |                              |         |         |                     |                     |
	| delete  | -p cert-expiration-950320                              | cert-expiration-950320       | jenkins | v1.35.0 | 07 Apr 25 13:36 UTC | 07 Apr 25 13:36 UTC |
	| delete  | -p                                                     | disable-driver-mounts-696615 | jenkins | v1.35.0 | 07 Apr 25 13:36 UTC | 07 Apr 25 13:36 UTC |
	|         | disable-driver-mounts-696615                           |                              |         |         |                     |                     |
	| start   | -p pause-111763 --memory=2048                          | pause-111763                 | jenkins | v1.35.0 | 07 Apr 25 13:36 UTC | 07 Apr 25 13:38 UTC |
	|         | --install-addons=false                                 |                              |         |         |                     |                     |
	|         | --wait=all --driver=kvm2                               |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	| start   | -p pause-111763                                        | pause-111763                 | jenkins | v1.35.0 | 07 Apr 25 13:38 UTC | 07 Apr 25 13:39 UTC |
	|         | --alsologtostderr                                      |                              |         |         |                     |                     |
	|         | -v=1 --driver=kvm2                                     |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	|---------|--------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2025/04/07 13:38:12
	Running on machine: ubuntu-20-agent-2
	Binary: Built with gc go1.24.0 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0407 13:38:12.710152 1215662 out.go:345] Setting OutFile to fd 1 ...
	I0407 13:38:12.710725 1215662 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0407 13:38:12.710740 1215662 out.go:358] Setting ErrFile to fd 2...
	I0407 13:38:12.710746 1215662 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0407 13:38:12.711005 1215662 root.go:338] Updating PATH: /home/jenkins/minikube-integration/20602-1162386/.minikube/bin
	I0407 13:38:12.711660 1215662 out.go:352] Setting JSON to false
	I0407 13:38:12.712905 1215662 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-2","uptime":19237,"bootTime":1744013856,"procs":216,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1078-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0407 13:38:12.713058 1215662 start.go:139] virtualization: kvm guest
	I0407 13:38:12.715805 1215662 out.go:177] * [pause-111763] minikube v1.35.0 on Ubuntu 20.04 (kvm/amd64)
	I0407 13:38:12.718419 1215662 out.go:177]   - MINIKUBE_LOCATION=20602
	I0407 13:38:12.718416 1215662 notify.go:220] Checking for updates...
	I0407 13:38:12.720692 1215662 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0407 13:38:12.722446 1215662 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/20602-1162386/kubeconfig
	I0407 13:38:12.724231 1215662 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/20602-1162386/.minikube
	I0407 13:38:12.726029 1215662 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0407 13:38:12.728023 1215662 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0407 13:38:12.730279 1215662 config.go:182] Loaded profile config "pause-111763": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.32.2
	I0407 13:38:12.730845 1215662 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0407 13:38:12.730963 1215662 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0407 13:38:12.749257 1215662 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34461
	I0407 13:38:12.749865 1215662 main.go:141] libmachine: () Calling .GetVersion
	I0407 13:38:12.750543 1215662 main.go:141] libmachine: Using API Version  1
	I0407 13:38:12.750580 1215662 main.go:141] libmachine: () Calling .SetConfigRaw
	I0407 13:38:12.751100 1215662 main.go:141] libmachine: () Calling .GetMachineName
	I0407 13:38:12.751351 1215662 main.go:141] libmachine: (pause-111763) Calling .DriverName
	I0407 13:38:12.751650 1215662 driver.go:394] Setting default libvirt URI to qemu:///system
	I0407 13:38:12.752062 1215662 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0407 13:38:12.752116 1215662 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0407 13:38:12.770118 1215662 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33017
	I0407 13:38:12.770628 1215662 main.go:141] libmachine: () Calling .GetVersion
	I0407 13:38:12.771181 1215662 main.go:141] libmachine: Using API Version  1
	I0407 13:38:12.771210 1215662 main.go:141] libmachine: () Calling .SetConfigRaw
	I0407 13:38:12.771638 1215662 main.go:141] libmachine: () Calling .GetMachineName
	I0407 13:38:12.771850 1215662 main.go:141] libmachine: (pause-111763) Calling .DriverName
	I0407 13:38:12.815130 1215662 out.go:177] * Using the kvm2 driver based on existing profile
	I0407 13:38:12.816745 1215662 start.go:297] selected driver: kvm2
	I0407 13:38:12.816770 1215662 start.go:901] validating driver "kvm2" against &{Name:pause-111763 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.35.0-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.46-1743675393-20591@sha256:8de8167e280414c9d16e4c3da59bc85bc7c9cc24228af995863fc7bcabfcf727 Memory:2048 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.32.2 ClusterName:pa
use-111763 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.50.135 Port:8443 KubernetesVersion:v1.32.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm
:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0407 13:38:12.816913 1215662 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0407 13:38:12.817352 1215662 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0407 13:38:12.817464 1215662 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/20602-1162386/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0407 13:38:12.835337 1215662 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.35.0
	I0407 13:38:12.836313 1215662 cni.go:84] Creating CNI manager for ""
	I0407 13:38:12.836389 1215662 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0407 13:38:12.836477 1215662 start.go:340] cluster config:
	{Name:pause-111763 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.35.0-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.46-1743675393-20591@sha256:8de8167e280414c9d16e4c3da59bc85bc7c9cc24228af995863fc7bcabfcf727 Memory:2048 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.32.2 ClusterName:pause-111763 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServe
rNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.50.135 Port:8443 KubernetesVersion:v1.32.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-alias
es:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0407 13:38:12.836671 1215662 iso.go:125] acquiring lock: {Name:mk51e1827709f7a3810dbd898083f8185ece65eb Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0407 13:38:12.839010 1215662 out.go:177] * Starting "pause-111763" primary control-plane node in "pause-111763" cluster
	I0407 13:38:12.840629 1215662 preload.go:131] Checking if preload exists for k8s version v1.32.2 and runtime crio
	I0407 13:38:12.840698 1215662 preload.go:146] Found local preload: /home/jenkins/minikube-integration/20602-1162386/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.32.2-cri-o-overlay-amd64.tar.lz4
	I0407 13:38:12.840712 1215662 cache.go:56] Caching tarball of preloaded images
	I0407 13:38:12.840821 1215662 preload.go:172] Found /home/jenkins/minikube-integration/20602-1162386/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.32.2-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I0407 13:38:12.840839 1215662 cache.go:59] Finished verifying existence of preloaded tar for v1.32.2 on crio
	I0407 13:38:12.841035 1215662 profile.go:143] Saving config to /home/jenkins/minikube-integration/20602-1162386/.minikube/profiles/pause-111763/config.json ...
	I0407 13:38:12.841289 1215662 start.go:360] acquireMachinesLock for pause-111763: {Name:mk51d4c744fa92d56cf6ac11b1e792c85ef6709a Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0407 13:38:12.841342 1215662 start.go:364] duration metric: took 30.538µs to acquireMachinesLock for "pause-111763"
	I0407 13:38:12.841364 1215662 start.go:96] Skipping create...Using existing machine configuration
	I0407 13:38:12.841373 1215662 fix.go:54] fixHost starting: 
	I0407 13:38:12.841694 1215662 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0407 13:38:12.841751 1215662 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0407 13:38:12.858568 1215662 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46103
	I0407 13:38:12.859080 1215662 main.go:141] libmachine: () Calling .GetVersion
	I0407 13:38:12.859632 1215662 main.go:141] libmachine: Using API Version  1
	I0407 13:38:12.859650 1215662 main.go:141] libmachine: () Calling .SetConfigRaw
	I0407 13:38:12.860076 1215662 main.go:141] libmachine: () Calling .GetMachineName
	I0407 13:38:12.860356 1215662 main.go:141] libmachine: (pause-111763) Calling .DriverName
	I0407 13:38:12.860590 1215662 main.go:141] libmachine: (pause-111763) Calling .GetState
	I0407 13:38:12.862792 1215662 fix.go:112] recreateIfNeeded on pause-111763: state=Running err=<nil>
	W0407 13:38:12.862823 1215662 fix.go:138] unexpected machine state, will restart: <nil>
	I0407 13:38:12.865411 1215662 out.go:177] * Updating the running kvm2 "pause-111763" VM ...
	I0407 13:38:11.380309 1214596 pod_ready.go:103] pod "metrics-server-f79f97bbb-nwxq2" in "kube-system" namespace has status "Ready":"False"
	I0407 13:38:13.877384 1214596 pod_ready.go:103] pod "metrics-server-f79f97bbb-nwxq2" in "kube-system" namespace has status "Ready":"False"
	I0407 13:38:12.553096 1214786 pod_ready.go:103] pod "metrics-server-f79f97bbb-vntf4" in "kube-system" namespace has status "Ready":"False"
	I0407 13:38:15.052537 1214786 pod_ready.go:103] pod "metrics-server-f79f97bbb-vntf4" in "kube-system" namespace has status "Ready":"False"
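The pod_ready lines interleaved here come from other concurrent minikube processes (note the different PIDs) still waiting for their metrics-server pods to report Ready. A rough manual equivalent of that readiness probe, illustrative only and not part of the captured run, with the pod name taken from the line above:

    kubectl -n kube-system get pod metrics-server-f79f97bbb-vntf4 \
      -o jsonpath='{.status.conditions[?(@.type=="Ready")].status}'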
	I0407 13:38:13.804758 1213906 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0407 13:38:13.818792 1213906 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0407 13:38:13.818873 1213906 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0407 13:38:13.855066 1213906 cri.go:89] found id: ""
	I0407 13:38:13.855101 1213906 logs.go:282] 0 containers: []
	W0407 13:38:13.855111 1213906 logs.go:284] No container was found matching "kube-apiserver"
	I0407 13:38:13.855118 1213906 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0407 13:38:13.855177 1213906 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0407 13:38:13.892476 1213906 cri.go:89] found id: ""
	I0407 13:38:13.892508 1213906 logs.go:282] 0 containers: []
	W0407 13:38:13.892519 1213906 logs.go:284] No container was found matching "etcd"
	I0407 13:38:13.892527 1213906 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0407 13:38:13.892595 1213906 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0407 13:38:13.927175 1213906 cri.go:89] found id: ""
	I0407 13:38:13.927208 1213906 logs.go:282] 0 containers: []
	W0407 13:38:13.927217 1213906 logs.go:284] No container was found matching "coredns"
	I0407 13:38:13.927224 1213906 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0407 13:38:13.927312 1213906 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0407 13:38:13.971556 1213906 cri.go:89] found id: ""
	I0407 13:38:13.971581 1213906 logs.go:282] 0 containers: []
	W0407 13:38:13.971591 1213906 logs.go:284] No container was found matching "kube-scheduler"
	I0407 13:38:13.971599 1213906 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0407 13:38:13.971662 1213906 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0407 13:38:14.011793 1213906 cri.go:89] found id: ""
	I0407 13:38:14.011824 1213906 logs.go:282] 0 containers: []
	W0407 13:38:14.011835 1213906 logs.go:284] No container was found matching "kube-proxy"
	I0407 13:38:14.011843 1213906 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0407 13:38:14.011925 1213906 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0407 13:38:14.050493 1213906 cri.go:89] found id: ""
	I0407 13:38:14.050527 1213906 logs.go:282] 0 containers: []
	W0407 13:38:14.050538 1213906 logs.go:284] No container was found matching "kube-controller-manager"
	I0407 13:38:14.050547 1213906 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0407 13:38:14.050617 1213906 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0407 13:38:14.085673 1213906 cri.go:89] found id: ""
	I0407 13:38:14.085724 1213906 logs.go:282] 0 containers: []
	W0407 13:38:14.085737 1213906 logs.go:284] No container was found matching "kindnet"
	I0407 13:38:14.085746 1213906 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0407 13:38:14.085812 1213906 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0407 13:38:14.131856 1213906 cri.go:89] found id: ""
	I0407 13:38:14.131893 1213906 logs.go:282] 0 containers: []
	W0407 13:38:14.131906 1213906 logs.go:284] No container was found matching "kubernetes-dashboard"
	I0407 13:38:14.131920 1213906 logs.go:123] Gathering logs for kubelet ...
	I0407 13:38:14.131937 1213906 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0407 13:38:14.185085 1213906 logs.go:123] Gathering logs for dmesg ...
	I0407 13:38:14.185138 1213906 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0407 13:38:14.199586 1213906 logs.go:123] Gathering logs for describe nodes ...
	I0407 13:38:14.199625 1213906 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0407 13:38:14.277571 1213906 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0407 13:38:14.277604 1213906 logs.go:123] Gathering logs for CRI-O ...
	I0407 13:38:14.277624 1213906 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0407 13:38:14.353802 1213906 logs.go:123] Gathering logs for container status ...
	I0407 13:38:14.353859 1213906 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
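The block above is one complete log-gathering pass: with the apiserver unreachable, each control-plane component is probed by name through crictl, every probe returns no containers, and minikube falls back to kubelet, dmesg, describe-nodes, CRI-O and container-status output. A hand-run sketch of the same probes on the node, illustrative only, with the component list copied from the messages above:

    # probe each expected component the way the cri.go checks above do
    for name in kube-apiserver etcd coredns kube-scheduler kube-proxy \
                kube-controller-manager kindnet kubernetes-dashboard; do
      sudo crictl ps -a --quiet --name="$name"
    done
    # fallback log sources gathered when nothing is found
    sudo journalctl -u kubelet -n 400
    sudo journalctl -u crio -n 400
    sudo crictl ps -a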
	I0407 13:38:16.895403 1213906 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0407 13:38:16.909675 1213906 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0407 13:38:16.909846 1213906 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0407 13:38:16.945406 1213906 cri.go:89] found id: ""
	I0407 13:38:16.945455 1213906 logs.go:282] 0 containers: []
	W0407 13:38:16.945484 1213906 logs.go:284] No container was found matching "kube-apiserver"
	I0407 13:38:16.945494 1213906 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0407 13:38:16.945574 1213906 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0407 13:38:16.983588 1213906 cri.go:89] found id: ""
	I0407 13:38:16.983626 1213906 logs.go:282] 0 containers: []
	W0407 13:38:16.983638 1213906 logs.go:284] No container was found matching "etcd"
	I0407 13:38:16.983647 1213906 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0407 13:38:16.983717 1213906 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0407 13:38:17.020444 1213906 cri.go:89] found id: ""
	I0407 13:38:17.020487 1213906 logs.go:282] 0 containers: []
	W0407 13:38:17.020501 1213906 logs.go:284] No container was found matching "coredns"
	I0407 13:38:17.020510 1213906 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0407 13:38:17.020593 1213906 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0407 13:38:17.060614 1213906 cri.go:89] found id: ""
	I0407 13:38:17.060657 1213906 logs.go:282] 0 containers: []
	W0407 13:38:17.060669 1213906 logs.go:284] No container was found matching "kube-scheduler"
	I0407 13:38:17.060678 1213906 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0407 13:38:17.060762 1213906 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0407 13:38:17.105096 1213906 cri.go:89] found id: ""
	I0407 13:38:17.105136 1213906 logs.go:282] 0 containers: []
	W0407 13:38:17.105148 1213906 logs.go:284] No container was found matching "kube-proxy"
	I0407 13:38:17.105156 1213906 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0407 13:38:17.105237 1213906 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0407 13:38:17.144101 1213906 cri.go:89] found id: ""
	I0407 13:38:17.144140 1213906 logs.go:282] 0 containers: []
	W0407 13:38:17.144156 1213906 logs.go:284] No container was found matching "kube-controller-manager"
	I0407 13:38:17.144166 1213906 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0407 13:38:17.144242 1213906 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0407 13:38:17.190569 1213906 cri.go:89] found id: ""
	I0407 13:38:17.190602 1213906 logs.go:282] 0 containers: []
	W0407 13:38:17.190613 1213906 logs.go:284] No container was found matching "kindnet"
	I0407 13:38:17.190621 1213906 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0407 13:38:17.190693 1213906 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0407 13:38:17.233997 1213906 cri.go:89] found id: ""
	I0407 13:38:17.234030 1213906 logs.go:282] 0 containers: []
	W0407 13:38:17.234039 1213906 logs.go:284] No container was found matching "kubernetes-dashboard"
	I0407 13:38:17.234051 1213906 logs.go:123] Gathering logs for CRI-O ...
	I0407 13:38:17.234065 1213906 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0407 13:38:17.321443 1213906 logs.go:123] Gathering logs for container status ...
	I0407 13:38:17.321495 1213906 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0407 13:38:17.370755 1213906 logs.go:123] Gathering logs for kubelet ...
	I0407 13:38:17.370794 1213906 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0407 13:38:17.429210 1213906 logs.go:123] Gathering logs for dmesg ...
	I0407 13:38:17.429268 1213906 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0407 13:38:17.444684 1213906 logs.go:123] Gathering logs for describe nodes ...
	I0407 13:38:17.444722 1213906 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0407 13:38:17.522630 1213906 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0407 13:38:12.867268 1215662 machine.go:93] provisionDockerMachine start ...
	I0407 13:38:12.867310 1215662 main.go:141] libmachine: (pause-111763) Calling .DriverName
	I0407 13:38:12.867709 1215662 main.go:141] libmachine: (pause-111763) Calling .GetSSHHostname
	I0407 13:38:12.872753 1215662 main.go:141] libmachine: (pause-111763) DBG | domain pause-111763 has defined MAC address 52:54:00:6e:6a:43 in network mk-pause-111763
	I0407 13:38:12.873448 1215662 main.go:141] libmachine: (pause-111763) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:6e:6a:43", ip: ""} in network mk-pause-111763: {Iface:virbr2 ExpiryTime:2025-04-07 14:37:26 +0000 UTC Type:0 Mac:52:54:00:6e:6a:43 Iaid: IPaddr:192.168.50.135 Prefix:24 Hostname:pause-111763 Clientid:01:52:54:00:6e:6a:43}
	I0407 13:38:12.873491 1215662 main.go:141] libmachine: (pause-111763) DBG | domain pause-111763 has defined IP address 192.168.50.135 and MAC address 52:54:00:6e:6a:43 in network mk-pause-111763
	I0407 13:38:12.873757 1215662 main.go:141] libmachine: (pause-111763) Calling .GetSSHPort
	I0407 13:38:12.873993 1215662 main.go:141] libmachine: (pause-111763) Calling .GetSSHKeyPath
	I0407 13:38:12.874207 1215662 main.go:141] libmachine: (pause-111763) Calling .GetSSHKeyPath
	I0407 13:38:12.874508 1215662 main.go:141] libmachine: (pause-111763) Calling .GetSSHUsername
	I0407 13:38:12.874871 1215662 main.go:141] libmachine: Using SSH client type: native
	I0407 13:38:12.875261 1215662 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x836360] 0x839060 <nil>  [] 0s} 192.168.50.135 22 <nil> <nil>}
	I0407 13:38:12.875287 1215662 main.go:141] libmachine: About to run SSH command:
	hostname
	I0407 13:38:12.978730 1215662 main.go:141] libmachine: SSH cmd err, output: <nil>: pause-111763
	
	I0407 13:38:12.978768 1215662 main.go:141] libmachine: (pause-111763) Calling .GetMachineName
	I0407 13:38:12.979116 1215662 buildroot.go:166] provisioning hostname "pause-111763"
	I0407 13:38:12.979144 1215662 main.go:141] libmachine: (pause-111763) Calling .GetMachineName
	I0407 13:38:12.979355 1215662 main.go:141] libmachine: (pause-111763) Calling .GetSSHHostname
	I0407 13:38:12.982540 1215662 main.go:141] libmachine: (pause-111763) DBG | domain pause-111763 has defined MAC address 52:54:00:6e:6a:43 in network mk-pause-111763
	I0407 13:38:12.982907 1215662 main.go:141] libmachine: (pause-111763) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:6e:6a:43", ip: ""} in network mk-pause-111763: {Iface:virbr2 ExpiryTime:2025-04-07 14:37:26 +0000 UTC Type:0 Mac:52:54:00:6e:6a:43 Iaid: IPaddr:192.168.50.135 Prefix:24 Hostname:pause-111763 Clientid:01:52:54:00:6e:6a:43}
	I0407 13:38:12.982932 1215662 main.go:141] libmachine: (pause-111763) DBG | domain pause-111763 has defined IP address 192.168.50.135 and MAC address 52:54:00:6e:6a:43 in network mk-pause-111763
	I0407 13:38:12.983051 1215662 main.go:141] libmachine: (pause-111763) Calling .GetSSHPort
	I0407 13:38:12.983270 1215662 main.go:141] libmachine: (pause-111763) Calling .GetSSHKeyPath
	I0407 13:38:12.983499 1215662 main.go:141] libmachine: (pause-111763) Calling .GetSSHKeyPath
	I0407 13:38:12.983639 1215662 main.go:141] libmachine: (pause-111763) Calling .GetSSHUsername
	I0407 13:38:12.983819 1215662 main.go:141] libmachine: Using SSH client type: native
	I0407 13:38:12.984097 1215662 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x836360] 0x839060 <nil>  [] 0s} 192.168.50.135 22 <nil> <nil>}
	I0407 13:38:12.984122 1215662 main.go:141] libmachine: About to run SSH command:
	sudo hostname pause-111763 && echo "pause-111763" | sudo tee /etc/hostname
	I0407 13:38:13.100478 1215662 main.go:141] libmachine: SSH cmd err, output: <nil>: pause-111763
	
	I0407 13:38:13.100506 1215662 main.go:141] libmachine: (pause-111763) Calling .GetSSHHostname
	I0407 13:38:13.103717 1215662 main.go:141] libmachine: (pause-111763) DBG | domain pause-111763 has defined MAC address 52:54:00:6e:6a:43 in network mk-pause-111763
	I0407 13:38:13.104189 1215662 main.go:141] libmachine: (pause-111763) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:6e:6a:43", ip: ""} in network mk-pause-111763: {Iface:virbr2 ExpiryTime:2025-04-07 14:37:26 +0000 UTC Type:0 Mac:52:54:00:6e:6a:43 Iaid: IPaddr:192.168.50.135 Prefix:24 Hostname:pause-111763 Clientid:01:52:54:00:6e:6a:43}
	I0407 13:38:13.104228 1215662 main.go:141] libmachine: (pause-111763) DBG | domain pause-111763 has defined IP address 192.168.50.135 and MAC address 52:54:00:6e:6a:43 in network mk-pause-111763
	I0407 13:38:13.104473 1215662 main.go:141] libmachine: (pause-111763) Calling .GetSSHPort
	I0407 13:38:13.104721 1215662 main.go:141] libmachine: (pause-111763) Calling .GetSSHKeyPath
	I0407 13:38:13.104978 1215662 main.go:141] libmachine: (pause-111763) Calling .GetSSHKeyPath
	I0407 13:38:13.105172 1215662 main.go:141] libmachine: (pause-111763) Calling .GetSSHUsername
	I0407 13:38:13.105441 1215662 main.go:141] libmachine: Using SSH client type: native
	I0407 13:38:13.105661 1215662 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x836360] 0x839060 <nil>  [] 0s} 192.168.50.135 22 <nil> <nil>}
	I0407 13:38:13.105679 1215662 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\spause-111763' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 pause-111763/g' /etc/hosts;
				else 
					echo '127.0.1.1 pause-111763' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0407 13:38:13.215373 1215662 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0407 13:38:13.215409 1215662 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/20602-1162386/.minikube CaCertPath:/home/jenkins/minikube-integration/20602-1162386/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/20602-1162386/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/20602-1162386/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/20602-1162386/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/20602-1162386/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/20602-1162386/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/20602-1162386/.minikube}
	I0407 13:38:13.215440 1215662 buildroot.go:174] setting up certificates
	I0407 13:38:13.215455 1215662 provision.go:84] configureAuth start
	I0407 13:38:13.215466 1215662 main.go:141] libmachine: (pause-111763) Calling .GetMachineName
	I0407 13:38:13.215858 1215662 main.go:141] libmachine: (pause-111763) Calling .GetIP
	I0407 13:38:13.219032 1215662 main.go:141] libmachine: (pause-111763) DBG | domain pause-111763 has defined MAC address 52:54:00:6e:6a:43 in network mk-pause-111763
	I0407 13:38:13.219477 1215662 main.go:141] libmachine: (pause-111763) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:6e:6a:43", ip: ""} in network mk-pause-111763: {Iface:virbr2 ExpiryTime:2025-04-07 14:37:26 +0000 UTC Type:0 Mac:52:54:00:6e:6a:43 Iaid: IPaddr:192.168.50.135 Prefix:24 Hostname:pause-111763 Clientid:01:52:54:00:6e:6a:43}
	I0407 13:38:13.219510 1215662 main.go:141] libmachine: (pause-111763) DBG | domain pause-111763 has defined IP address 192.168.50.135 and MAC address 52:54:00:6e:6a:43 in network mk-pause-111763
	I0407 13:38:13.219671 1215662 main.go:141] libmachine: (pause-111763) Calling .GetSSHHostname
	I0407 13:38:13.222926 1215662 main.go:141] libmachine: (pause-111763) DBG | domain pause-111763 has defined MAC address 52:54:00:6e:6a:43 in network mk-pause-111763
	I0407 13:38:13.223484 1215662 main.go:141] libmachine: (pause-111763) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:6e:6a:43", ip: ""} in network mk-pause-111763: {Iface:virbr2 ExpiryTime:2025-04-07 14:37:26 +0000 UTC Type:0 Mac:52:54:00:6e:6a:43 Iaid: IPaddr:192.168.50.135 Prefix:24 Hostname:pause-111763 Clientid:01:52:54:00:6e:6a:43}
	I0407 13:38:13.223519 1215662 main.go:141] libmachine: (pause-111763) DBG | domain pause-111763 has defined IP address 192.168.50.135 and MAC address 52:54:00:6e:6a:43 in network mk-pause-111763
	I0407 13:38:13.223722 1215662 provision.go:143] copyHostCerts
	I0407 13:38:13.223788 1215662 exec_runner.go:144] found /home/jenkins/minikube-integration/20602-1162386/.minikube/cert.pem, removing ...
	I0407 13:38:13.223809 1215662 exec_runner.go:203] rm: /home/jenkins/minikube-integration/20602-1162386/.minikube/cert.pem
	I0407 13:38:13.223882 1215662 exec_runner.go:151] cp: /home/jenkins/minikube-integration/20602-1162386/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/20602-1162386/.minikube/cert.pem (1123 bytes)
	I0407 13:38:13.223977 1215662 exec_runner.go:144] found /home/jenkins/minikube-integration/20602-1162386/.minikube/key.pem, removing ...
	I0407 13:38:13.223985 1215662 exec_runner.go:203] rm: /home/jenkins/minikube-integration/20602-1162386/.minikube/key.pem
	I0407 13:38:13.224012 1215662 exec_runner.go:151] cp: /home/jenkins/minikube-integration/20602-1162386/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/20602-1162386/.minikube/key.pem (1675 bytes)
	I0407 13:38:13.224069 1215662 exec_runner.go:144] found /home/jenkins/minikube-integration/20602-1162386/.minikube/ca.pem, removing ...
	I0407 13:38:13.224077 1215662 exec_runner.go:203] rm: /home/jenkins/minikube-integration/20602-1162386/.minikube/ca.pem
	I0407 13:38:13.224098 1215662 exec_runner.go:151] cp: /home/jenkins/minikube-integration/20602-1162386/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/20602-1162386/.minikube/ca.pem (1078 bytes)
	I0407 13:38:13.224146 1215662 provision.go:117] generating server cert: /home/jenkins/minikube-integration/20602-1162386/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/20602-1162386/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/20602-1162386/.minikube/certs/ca-key.pem org=jenkins.pause-111763 san=[127.0.0.1 192.168.50.135 localhost minikube pause-111763]
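copyHostCerts refreshes the CA/client material under .minikube, and a new server certificate is then generated with the SANs listed above. One way to confirm the regenerated server.pem carries those SANs, illustrative and not part of the run, using the path from the provisioning line above:

    openssl x509 -noout -text \
      -in /home/jenkins/minikube-integration/20602-1162386/.minikube/machines/server.pem \
      | grep -A1 'Subject Alternative Name'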
	I0407 13:38:13.783449 1215662 provision.go:177] copyRemoteCerts
	I0407 13:38:13.783532 1215662 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0407 13:38:13.783560 1215662 main.go:141] libmachine: (pause-111763) Calling .GetSSHHostname
	I0407 13:38:13.786663 1215662 main.go:141] libmachine: (pause-111763) DBG | domain pause-111763 has defined MAC address 52:54:00:6e:6a:43 in network mk-pause-111763
	I0407 13:38:13.786980 1215662 main.go:141] libmachine: (pause-111763) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:6e:6a:43", ip: ""} in network mk-pause-111763: {Iface:virbr2 ExpiryTime:2025-04-07 14:37:26 +0000 UTC Type:0 Mac:52:54:00:6e:6a:43 Iaid: IPaddr:192.168.50.135 Prefix:24 Hostname:pause-111763 Clientid:01:52:54:00:6e:6a:43}
	I0407 13:38:13.787006 1215662 main.go:141] libmachine: (pause-111763) DBG | domain pause-111763 has defined IP address 192.168.50.135 and MAC address 52:54:00:6e:6a:43 in network mk-pause-111763
	I0407 13:38:13.787249 1215662 main.go:141] libmachine: (pause-111763) Calling .GetSSHPort
	I0407 13:38:13.787577 1215662 main.go:141] libmachine: (pause-111763) Calling .GetSSHKeyPath
	I0407 13:38:13.787757 1215662 main.go:141] libmachine: (pause-111763) Calling .GetSSHUsername
	I0407 13:38:13.787936 1215662 sshutil.go:53] new ssh client: &{IP:192.168.50.135 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20602-1162386/.minikube/machines/pause-111763/id_rsa Username:docker}
	I0407 13:38:13.871639 1215662 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20602-1162386/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0407 13:38:13.901982 1215662 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20602-1162386/.minikube/machines/server.pem --> /etc/docker/server.pem (1208 bytes)
	I0407 13:38:13.936509 1215662 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20602-1162386/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0407 13:38:13.968492 1215662 provision.go:87] duration metric: took 753.020928ms to configureAuth
	I0407 13:38:13.968526 1215662 buildroot.go:189] setting minikube options for container-runtime
	I0407 13:38:13.968780 1215662 config.go:182] Loaded profile config "pause-111763": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.32.2
	I0407 13:38:13.968864 1215662 main.go:141] libmachine: (pause-111763) Calling .GetSSHHostname
	I0407 13:38:13.971825 1215662 main.go:141] libmachine: (pause-111763) DBG | domain pause-111763 has defined MAC address 52:54:00:6e:6a:43 in network mk-pause-111763
	I0407 13:38:13.972234 1215662 main.go:141] libmachine: (pause-111763) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:6e:6a:43", ip: ""} in network mk-pause-111763: {Iface:virbr2 ExpiryTime:2025-04-07 14:37:26 +0000 UTC Type:0 Mac:52:54:00:6e:6a:43 Iaid: IPaddr:192.168.50.135 Prefix:24 Hostname:pause-111763 Clientid:01:52:54:00:6e:6a:43}
	I0407 13:38:13.972272 1215662 main.go:141] libmachine: (pause-111763) DBG | domain pause-111763 has defined IP address 192.168.50.135 and MAC address 52:54:00:6e:6a:43 in network mk-pause-111763
	I0407 13:38:13.972509 1215662 main.go:141] libmachine: (pause-111763) Calling .GetSSHPort
	I0407 13:38:13.972789 1215662 main.go:141] libmachine: (pause-111763) Calling .GetSSHKeyPath
	I0407 13:38:13.973007 1215662 main.go:141] libmachine: (pause-111763) Calling .GetSSHKeyPath
	I0407 13:38:13.973226 1215662 main.go:141] libmachine: (pause-111763) Calling .GetSSHUsername
	I0407 13:38:13.973414 1215662 main.go:141] libmachine: Using SSH client type: native
	I0407 13:38:13.973740 1215662 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x836360] 0x839060 <nil>  [] 0s} 192.168.50.135 22 <nil> <nil>}
	I0407 13:38:13.973763 1215662 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0407 13:38:15.878556 1214596 pod_ready.go:103] pod "metrics-server-f79f97bbb-nwxq2" in "kube-system" namespace has status "Ready":"False"
	I0407 13:38:18.380673 1214596 pod_ready.go:103] pod "metrics-server-f79f97bbb-nwxq2" in "kube-system" namespace has status "Ready":"False"
	I0407 13:38:17.053095 1214786 pod_ready.go:103] pod "metrics-server-f79f97bbb-vntf4" in "kube-system" namespace has status "Ready":"False"
	I0407 13:38:19.552489 1214786 pod_ready.go:103] pod "metrics-server-f79f97bbb-vntf4" in "kube-system" namespace has status "Ready":"False"
	I0407 13:38:19.577044 1215662 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0407 13:38:19.577077 1215662 machine.go:96] duration metric: took 6.709784998s to provisionDockerMachine
	I0407 13:38:19.577090 1215662 start.go:293] postStartSetup for "pause-111763" (driver="kvm2")
	I0407 13:38:19.577107 1215662 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0407 13:38:19.577130 1215662 main.go:141] libmachine: (pause-111763) Calling .DriverName
	I0407 13:38:19.577633 1215662 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0407 13:38:19.577670 1215662 main.go:141] libmachine: (pause-111763) Calling .GetSSHHostname
	I0407 13:38:19.581841 1215662 main.go:141] libmachine: (pause-111763) DBG | domain pause-111763 has defined MAC address 52:54:00:6e:6a:43 in network mk-pause-111763
	I0407 13:38:19.582319 1215662 main.go:141] libmachine: (pause-111763) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:6e:6a:43", ip: ""} in network mk-pause-111763: {Iface:virbr2 ExpiryTime:2025-04-07 14:37:26 +0000 UTC Type:0 Mac:52:54:00:6e:6a:43 Iaid: IPaddr:192.168.50.135 Prefix:24 Hostname:pause-111763 Clientid:01:52:54:00:6e:6a:43}
	I0407 13:38:19.582356 1215662 main.go:141] libmachine: (pause-111763) DBG | domain pause-111763 has defined IP address 192.168.50.135 and MAC address 52:54:00:6e:6a:43 in network mk-pause-111763
	I0407 13:38:19.582593 1215662 main.go:141] libmachine: (pause-111763) Calling .GetSSHPort
	I0407 13:38:19.582936 1215662 main.go:141] libmachine: (pause-111763) Calling .GetSSHKeyPath
	I0407 13:38:19.583194 1215662 main.go:141] libmachine: (pause-111763) Calling .GetSSHUsername
	I0407 13:38:19.583393 1215662 sshutil.go:53] new ssh client: &{IP:192.168.50.135 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20602-1162386/.minikube/machines/pause-111763/id_rsa Username:docker}
	I0407 13:38:19.664859 1215662 ssh_runner.go:195] Run: cat /etc/os-release
	I0407 13:38:19.669619 1215662 info.go:137] Remote host: Buildroot 2023.02.9
	I0407 13:38:19.669656 1215662 filesync.go:126] Scanning /home/jenkins/minikube-integration/20602-1162386/.minikube/addons for local assets ...
	I0407 13:38:19.669758 1215662 filesync.go:126] Scanning /home/jenkins/minikube-integration/20602-1162386/.minikube/files for local assets ...
	I0407 13:38:19.669859 1215662 filesync.go:149] local asset: /home/jenkins/minikube-integration/20602-1162386/.minikube/files/etc/ssl/certs/11697162.pem -> 11697162.pem in /etc/ssl/certs
	I0407 13:38:19.669998 1215662 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0407 13:38:19.680625 1215662 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20602-1162386/.minikube/files/etc/ssl/certs/11697162.pem --> /etc/ssl/certs/11697162.pem (1708 bytes)
	I0407 13:38:19.709269 1215662 start.go:296] duration metric: took 132.160546ms for postStartSetup
	I0407 13:38:19.709314 1215662 fix.go:56] duration metric: took 6.867940004s for fixHost
	I0407 13:38:19.709343 1215662 main.go:141] libmachine: (pause-111763) Calling .GetSSHHostname
	I0407 13:38:19.713032 1215662 main.go:141] libmachine: (pause-111763) DBG | domain pause-111763 has defined MAC address 52:54:00:6e:6a:43 in network mk-pause-111763
	I0407 13:38:19.713533 1215662 main.go:141] libmachine: (pause-111763) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:6e:6a:43", ip: ""} in network mk-pause-111763: {Iface:virbr2 ExpiryTime:2025-04-07 14:37:26 +0000 UTC Type:0 Mac:52:54:00:6e:6a:43 Iaid: IPaddr:192.168.50.135 Prefix:24 Hostname:pause-111763 Clientid:01:52:54:00:6e:6a:43}
	I0407 13:38:19.713569 1215662 main.go:141] libmachine: (pause-111763) DBG | domain pause-111763 has defined IP address 192.168.50.135 and MAC address 52:54:00:6e:6a:43 in network mk-pause-111763
	I0407 13:38:19.713767 1215662 main.go:141] libmachine: (pause-111763) Calling .GetSSHPort
	I0407 13:38:19.714053 1215662 main.go:141] libmachine: (pause-111763) Calling .GetSSHKeyPath
	I0407 13:38:19.714338 1215662 main.go:141] libmachine: (pause-111763) Calling .GetSSHKeyPath
	I0407 13:38:19.714589 1215662 main.go:141] libmachine: (pause-111763) Calling .GetSSHUsername
	I0407 13:38:19.714832 1215662 main.go:141] libmachine: Using SSH client type: native
	I0407 13:38:19.715051 1215662 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x836360] 0x839060 <nil>  [] 0s} 192.168.50.135 22 <nil> <nil>}
	I0407 13:38:19.715064 1215662 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0407 13:38:19.823753 1215662 main.go:141] libmachine: SSH cmd err, output: <nil>: 1744033099.814927524
	
	I0407 13:38:19.823786 1215662 fix.go:216] guest clock: 1744033099.814927524
	I0407 13:38:19.823798 1215662 fix.go:229] Guest: 2025-04-07 13:38:19.814927524 +0000 UTC Remote: 2025-04-07 13:38:19.709319613 +0000 UTC m=+7.045075644 (delta=105.607911ms)
	I0407 13:38:19.823828 1215662 fix.go:200] guest clock delta is within tolerance: 105.607911ms
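The reported delta is simply the guest timestamp minus the host-side timestamp: 1744033099.814927524 - 1744033099.709319613 = 0.105607911 s, i.e. the 105.607911ms shown, which falls inside minikube's skew tolerance, so the guest clock is left untouched.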
	I0407 13:38:19.823835 1215662 start.go:83] releasing machines lock for "pause-111763", held for 6.982480025s
	I0407 13:38:19.823860 1215662 main.go:141] libmachine: (pause-111763) Calling .DriverName
	I0407 13:38:19.824214 1215662 main.go:141] libmachine: (pause-111763) Calling .GetIP
	I0407 13:38:19.828076 1215662 main.go:141] libmachine: (pause-111763) DBG | domain pause-111763 has defined MAC address 52:54:00:6e:6a:43 in network mk-pause-111763
	I0407 13:38:19.828644 1215662 main.go:141] libmachine: (pause-111763) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:6e:6a:43", ip: ""} in network mk-pause-111763: {Iface:virbr2 ExpiryTime:2025-04-07 14:37:26 +0000 UTC Type:0 Mac:52:54:00:6e:6a:43 Iaid: IPaddr:192.168.50.135 Prefix:24 Hostname:pause-111763 Clientid:01:52:54:00:6e:6a:43}
	I0407 13:38:19.828700 1215662 main.go:141] libmachine: (pause-111763) DBG | domain pause-111763 has defined IP address 192.168.50.135 and MAC address 52:54:00:6e:6a:43 in network mk-pause-111763
	I0407 13:38:19.829126 1215662 main.go:141] libmachine: (pause-111763) Calling .DriverName
	I0407 13:38:19.829968 1215662 main.go:141] libmachine: (pause-111763) Calling .DriverName
	I0407 13:38:19.830223 1215662 main.go:141] libmachine: (pause-111763) Calling .DriverName
	I0407 13:38:19.830339 1215662 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0407 13:38:19.830401 1215662 main.go:141] libmachine: (pause-111763) Calling .GetSSHHostname
	I0407 13:38:19.830533 1215662 ssh_runner.go:195] Run: cat /version.json
	I0407 13:38:19.830557 1215662 main.go:141] libmachine: (pause-111763) Calling .GetSSHHostname
	I0407 13:38:19.834202 1215662 main.go:141] libmachine: (pause-111763) DBG | domain pause-111763 has defined MAC address 52:54:00:6e:6a:43 in network mk-pause-111763
	I0407 13:38:19.834241 1215662 main.go:141] libmachine: (pause-111763) DBG | domain pause-111763 has defined MAC address 52:54:00:6e:6a:43 in network mk-pause-111763
	I0407 13:38:19.834706 1215662 main.go:141] libmachine: (pause-111763) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:6e:6a:43", ip: ""} in network mk-pause-111763: {Iface:virbr2 ExpiryTime:2025-04-07 14:37:26 +0000 UTC Type:0 Mac:52:54:00:6e:6a:43 Iaid: IPaddr:192.168.50.135 Prefix:24 Hostname:pause-111763 Clientid:01:52:54:00:6e:6a:43}
	I0407 13:38:19.834742 1215662 main.go:141] libmachine: (pause-111763) DBG | domain pause-111763 has defined IP address 192.168.50.135 and MAC address 52:54:00:6e:6a:43 in network mk-pause-111763
	I0407 13:38:19.834769 1215662 main.go:141] libmachine: (pause-111763) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:6e:6a:43", ip: ""} in network mk-pause-111763: {Iface:virbr2 ExpiryTime:2025-04-07 14:37:26 +0000 UTC Type:0 Mac:52:54:00:6e:6a:43 Iaid: IPaddr:192.168.50.135 Prefix:24 Hostname:pause-111763 Clientid:01:52:54:00:6e:6a:43}
	I0407 13:38:19.834784 1215662 main.go:141] libmachine: (pause-111763) DBG | domain pause-111763 has defined IP address 192.168.50.135 and MAC address 52:54:00:6e:6a:43 in network mk-pause-111763
	I0407 13:38:19.835040 1215662 main.go:141] libmachine: (pause-111763) Calling .GetSSHPort
	I0407 13:38:19.835165 1215662 main.go:141] libmachine: (pause-111763) Calling .GetSSHPort
	I0407 13:38:19.835305 1215662 main.go:141] libmachine: (pause-111763) Calling .GetSSHKeyPath
	I0407 13:38:19.835398 1215662 main.go:141] libmachine: (pause-111763) Calling .GetSSHKeyPath
	I0407 13:38:19.835484 1215662 main.go:141] libmachine: (pause-111763) Calling .GetSSHUsername
	I0407 13:38:19.835547 1215662 main.go:141] libmachine: (pause-111763) Calling .GetSSHUsername
	I0407 13:38:19.835630 1215662 sshutil.go:53] new ssh client: &{IP:192.168.50.135 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20602-1162386/.minikube/machines/pause-111763/id_rsa Username:docker}
	I0407 13:38:19.836008 1215662 sshutil.go:53] new ssh client: &{IP:192.168.50.135 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20602-1162386/.minikube/machines/pause-111763/id_rsa Username:docker}
	I0407 13:38:19.940480 1215662 ssh_runner.go:195] Run: systemctl --version
	I0407 13:38:19.947496 1215662 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0407 13:38:20.107630 1215662 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0407 13:38:20.115827 1215662 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0407 13:38:20.115931 1215662 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0407 13:38:20.127442 1215662 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
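The find invocation above, which appears in the log with its shell escaping stripped, renames any bridge/podman CNI config to *.mk_disabled; here nothing matched, so nothing was disabled. Written out with normal quoting, the same command looks roughly like this (illustrative sketch, not a verbatim re-run):

    sudo find /etc/cni/net.d -maxdepth 1 -type f \
      \( \( -name '*bridge*' -or -name '*podman*' \) -and -not -name '*.mk_disabled' \) \
      -printf '%p, ' -exec sh -c 'sudo mv {} {}.mk_disabled' \;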
	I0407 13:38:20.127504 1215662 start.go:495] detecting cgroup driver to use...
	I0407 13:38:20.127588 1215662 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0407 13:38:20.150765 1215662 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0407 13:38:20.168683 1215662 docker.go:217] disabling cri-docker service (if available) ...
	I0407 13:38:20.168784 1215662 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0407 13:38:20.186228 1215662 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0407 13:38:20.205015 1215662 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0407 13:38:20.365027 1215662 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0407 13:38:20.553245 1215662 docker.go:233] disabling docker service ...
	I0407 13:38:20.553328 1215662 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0407 13:38:20.576783 1215662 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0407 13:38:20.594397 1215662 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0407 13:38:20.740611 1215662 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0407 13:38:20.877899 1215662 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0407 13:38:20.894874 1215662 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0407 13:38:20.916346 1215662 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10" pause image...
	I0407 13:38:20.916424 1215662 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0407 13:38:20.931160 1215662 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0407 13:38:20.931241 1215662 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0407 13:38:20.944392 1215662 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0407 13:38:20.958586 1215662 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0407 13:38:20.971927 1215662 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0407 13:38:20.983810 1215662 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0407 13:38:20.997486 1215662 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I0407 13:38:21.011799 1215662 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I0407 13:38:21.025083 1215662 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0407 13:38:21.037026 1215662 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0407 13:38:21.049000 1215662 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0407 13:38:21.196504 1215662 ssh_runner.go:195] Run: sudo systemctl restart crio
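The sed edits above point /etc/crio/crio.conf.d/02-crio.conf at the 3.10 pause image, the cgroupfs cgroup manager, a pod-scoped conmon cgroup and an open unprivileged-port range, after which CRI-O is restarted. A quick way to check that the file ended up with those values, illustrative only, with the expected lines reconstructed from the sed commands rather than read back from the node:

    grep -E 'pause_image|cgroup_manager|conmon_cgroup|default_sysctls|ip_unprivileged_port_start' \
        /etc/crio/crio.conf.d/02-crio.conf
    # expected, per the edits above:
    #   pause_image = "registry.k8s.io/pause:3.10"
    #   cgroup_manager = "cgroupfs"
    #   conmon_cgroup = "pod"
    #   default_sysctls = [
    #     "net.ipv4.ip_unprivileged_port_start=0",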
	I0407 13:38:21.458511 1215662 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I0407 13:38:21.458603 1215662 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0407 13:38:21.463966 1215662 start.go:563] Will wait 60s for crictl version
	I0407 13:38:21.464048 1215662 ssh_runner.go:195] Run: which crictl
	I0407 13:38:21.468284 1215662 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0407 13:38:21.509041 1215662 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0407 13:38:21.509181 1215662 ssh_runner.go:195] Run: crio --version
	I0407 13:38:21.544229 1215662 ssh_runner.go:195] Run: crio --version
	I0407 13:38:21.580336 1215662 out.go:177] * Preparing Kubernetes v1.32.2 on CRI-O 1.29.1 ...
	I0407 13:38:20.022948 1213906 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0407 13:38:20.037136 1213906 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0407 13:38:20.037218 1213906 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0407 13:38:20.076138 1213906 cri.go:89] found id: ""
	I0407 13:38:20.076168 1213906 logs.go:282] 0 containers: []
	W0407 13:38:20.076177 1213906 logs.go:284] No container was found matching "kube-apiserver"
	I0407 13:38:20.076183 1213906 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0407 13:38:20.076254 1213906 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0407 13:38:20.116308 1213906 cri.go:89] found id: ""
	I0407 13:38:20.116347 1213906 logs.go:282] 0 containers: []
	W0407 13:38:20.116357 1213906 logs.go:284] No container was found matching "etcd"
	I0407 13:38:20.116366 1213906 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0407 13:38:20.116425 1213906 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0407 13:38:20.154226 1213906 cri.go:89] found id: ""
	I0407 13:38:20.154261 1213906 logs.go:282] 0 containers: []
	W0407 13:38:20.154286 1213906 logs.go:284] No container was found matching "coredns"
	I0407 13:38:20.154293 1213906 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0407 13:38:20.154358 1213906 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0407 13:38:20.193534 1213906 cri.go:89] found id: ""
	I0407 13:38:20.193570 1213906 logs.go:282] 0 containers: []
	W0407 13:38:20.193581 1213906 logs.go:284] No container was found matching "kube-scheduler"
	I0407 13:38:20.193590 1213906 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0407 13:38:20.193658 1213906 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0407 13:38:20.233242 1213906 cri.go:89] found id: ""
	I0407 13:38:20.233280 1213906 logs.go:282] 0 containers: []
	W0407 13:38:20.233292 1213906 logs.go:284] No container was found matching "kube-proxy"
	I0407 13:38:20.233300 1213906 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0407 13:38:20.233379 1213906 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0407 13:38:20.273298 1213906 cri.go:89] found id: ""
	I0407 13:38:20.273340 1213906 logs.go:282] 0 containers: []
	W0407 13:38:20.273354 1213906 logs.go:284] No container was found matching "kube-controller-manager"
	I0407 13:38:20.273364 1213906 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0407 13:38:20.273483 1213906 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0407 13:38:20.317495 1213906 cri.go:89] found id: ""
	I0407 13:38:20.317538 1213906 logs.go:282] 0 containers: []
	W0407 13:38:20.317548 1213906 logs.go:284] No container was found matching "kindnet"
	I0407 13:38:20.317554 1213906 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0407 13:38:20.317611 1213906 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0407 13:38:20.356020 1213906 cri.go:89] found id: ""
	I0407 13:38:20.356054 1213906 logs.go:282] 0 containers: []
	W0407 13:38:20.356063 1213906 logs.go:284] No container was found matching "kubernetes-dashboard"
	I0407 13:38:20.356074 1213906 logs.go:123] Gathering logs for kubelet ...
	I0407 13:38:20.356087 1213906 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0407 13:38:20.424550 1213906 logs.go:123] Gathering logs for dmesg ...
	I0407 13:38:20.424618 1213906 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0407 13:38:20.444415 1213906 logs.go:123] Gathering logs for describe nodes ...
	I0407 13:38:20.444454 1213906 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0407 13:38:20.533211 1213906 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0407 13:38:20.533242 1213906 logs.go:123] Gathering logs for CRI-O ...
	I0407 13:38:20.533274 1213906 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0407 13:38:20.635661 1213906 logs.go:123] Gathering logs for container status ...
	I0407 13:38:20.635729 1213906 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0407 13:38:21.581865 1215662 main.go:141] libmachine: (pause-111763) Calling .GetIP
	I0407 13:38:21.585667 1215662 main.go:141] libmachine: (pause-111763) DBG | domain pause-111763 has defined MAC address 52:54:00:6e:6a:43 in network mk-pause-111763
	I0407 13:38:21.586228 1215662 main.go:141] libmachine: (pause-111763) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:6e:6a:43", ip: ""} in network mk-pause-111763: {Iface:virbr2 ExpiryTime:2025-04-07 14:37:26 +0000 UTC Type:0 Mac:52:54:00:6e:6a:43 Iaid: IPaddr:192.168.50.135 Prefix:24 Hostname:pause-111763 Clientid:01:52:54:00:6e:6a:43}
	I0407 13:38:21.586263 1215662 main.go:141] libmachine: (pause-111763) DBG | domain pause-111763 has defined IP address 192.168.50.135 and MAC address 52:54:00:6e:6a:43 in network mk-pause-111763
	I0407 13:38:21.586557 1215662 ssh_runner.go:195] Run: grep 192.168.50.1	host.minikube.internal$ /etc/hosts
	I0407 13:38:21.592056 1215662 kubeadm.go:883] updating cluster {Name:pause-111763 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.35.0-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.46-1743675393-20591@sha256:8de8167e280414c9d16e4c3da59bc85bc7c9cc24228af995863fc7bcabfcf727 Memory:2048 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.32.2 ClusterName:pause-111763 Name
space:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.50.135 Port:8443 KubernetesVersion:v1.32.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-secu
rity-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0407 13:38:21.592237 1215662 preload.go:131] Checking if preload exists for k8s version v1.32.2 and runtime crio
	I0407 13:38:21.592290 1215662 ssh_runner.go:195] Run: sudo crictl images --output json
	I0407 13:38:21.643329 1215662 crio.go:514] all images are preloaded for cri-o runtime.
	I0407 13:38:21.643354 1215662 crio.go:433] Images already preloaded, skipping extraction
	I0407 13:38:21.643412 1215662 ssh_runner.go:195] Run: sudo crictl images --output json
	I0407 13:38:21.681481 1215662 crio.go:514] all images are preloaded for cri-o runtime.
	I0407 13:38:21.681519 1215662 cache_images.go:84] Images are preloaded, skipping loading
	I0407 13:38:21.681531 1215662 kubeadm.go:934] updating node { 192.168.50.135 8443 v1.32.2 crio true true} ...
	I0407 13:38:21.681693 1215662 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.32.2/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=pause-111763 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.50.135
	
	[Install]
	 config:
	{KubernetesVersion:v1.32.2 ClusterName:pause-111763 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0407 13:38:21.681812 1215662 ssh_runner.go:195] Run: crio config
	I0407 13:38:21.769215 1215662 cni.go:84] Creating CNI manager for ""
	I0407 13:38:21.769286 1215662 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0407 13:38:21.769304 1215662 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0407 13:38:21.769335 1215662 kubeadm.go:189] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.50.135 APIServerPort:8443 KubernetesVersion:v1.32.2 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:pause-111763 NodeName:pause-111763 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.50.135"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.50.135 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kub
ernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0407 13:38:21.769515 1215662 kubeadm.go:195] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.50.135
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "pause-111763"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.50.135"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.50.135"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      - name: "proxy-refresh-interval"
	        value: "70000"
	kubernetesVersion: v1.32.2
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0407 13:38:21.769674 1215662 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.32.2
	I0407 13:38:21.801098 1215662 binaries.go:44] Found k8s binaries, skipping transfer
	I0407 13:38:21.801203 1215662 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0407 13:38:21.816962 1215662 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (312 bytes)
	I0407 13:38:21.837283 1215662 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0407 13:38:21.944712 1215662 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2292 bytes)
	I0407 13:38:22.025517 1215662 ssh_runner.go:195] Run: grep 192.168.50.135	control-plane.minikube.internal$ /etc/hosts
	I0407 13:38:22.032832 1215662 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0407 13:38:22.344255 1215662 ssh_runner.go:195] Run: sudo systemctl start kubelet
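	
	The block above renders the kubeadm, kubelet, and kube-proxy configuration, copies it onto the node, and restarts the kubelet. A minimal sketch of how the same state could be inspected by hand, reusing the file paths from the log and the `minikube ssh` form used elsewhere in this report (exact paths may differ between minikube versions):
	
	    out/minikube-linux-amd64 -p pause-111763 ssh "sudo cat /var/tmp/minikube/kubeadm.yaml.new"                       # rendered kubeadm config written above
	    out/minikube-linux-amd64 -p pause-111763 ssh "sudo cat /etc/systemd/system/kubelet.service.d/10-kubeadm.conf"    # kubelet drop-in written above
	    out/minikube-linux-amd64 -p pause-111763 ssh "sudo systemctl status kubelet --no-pager"                          # confirm the unit restarted cleanly
	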
	I0407 13:38:22.422193 1215662 certs.go:68] Setting up /home/jenkins/minikube-integration/20602-1162386/.minikube/profiles/pause-111763 for IP: 192.168.50.135
	I0407 13:38:22.422223 1215662 certs.go:194] generating shared ca certs ...
	I0407 13:38:22.422258 1215662 certs.go:226] acquiring lock for ca certs: {Name:mk8e89191fca7f2111bdd08c345368f593b0d5d0 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0407 13:38:22.422458 1215662 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/20602-1162386/.minikube/ca.key
	I0407 13:38:22.422532 1215662 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/20602-1162386/.minikube/proxy-client-ca.key
	I0407 13:38:22.422577 1215662 certs.go:256] generating profile certs ...
	I0407 13:38:22.422706 1215662 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/20602-1162386/.minikube/profiles/pause-111763/client.key
	I0407 13:38:22.422794 1215662 certs.go:359] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/20602-1162386/.minikube/profiles/pause-111763/apiserver.key.12705a14
	I0407 13:38:22.422855 1215662 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/20602-1162386/.minikube/profiles/pause-111763/proxy-client.key
	I0407 13:38:22.423026 1215662 certs.go:484] found cert: /home/jenkins/minikube-integration/20602-1162386/.minikube/certs/1169716.pem (1338 bytes)
	W0407 13:38:22.423071 1215662 certs.go:480] ignoring /home/jenkins/minikube-integration/20602-1162386/.minikube/certs/1169716_empty.pem, impossibly tiny 0 bytes
	I0407 13:38:22.423081 1215662 certs.go:484] found cert: /home/jenkins/minikube-integration/20602-1162386/.minikube/certs/ca-key.pem (1675 bytes)
	I0407 13:38:22.423120 1215662 certs.go:484] found cert: /home/jenkins/minikube-integration/20602-1162386/.minikube/certs/ca.pem (1078 bytes)
	I0407 13:38:22.423151 1215662 certs.go:484] found cert: /home/jenkins/minikube-integration/20602-1162386/.minikube/certs/cert.pem (1123 bytes)
	I0407 13:38:22.423181 1215662 certs.go:484] found cert: /home/jenkins/minikube-integration/20602-1162386/.minikube/certs/key.pem (1675 bytes)
	I0407 13:38:22.423273 1215662 certs.go:484] found cert: /home/jenkins/minikube-integration/20602-1162386/.minikube/files/etc/ssl/certs/11697162.pem (1708 bytes)
	I0407 13:38:22.424104 1215662 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20602-1162386/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0407 13:38:22.548050 1215662 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20602-1162386/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0407 13:38:22.633918 1215662 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20602-1162386/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0407 13:38:20.878291 1214596 pod_ready.go:103] pod "metrics-server-f79f97bbb-nwxq2" in "kube-system" namespace has status "Ready":"False"
	I0407 13:38:23.380633 1214596 pod_ready.go:103] pod "metrics-server-f79f97bbb-nwxq2" in "kube-system" namespace has status "Ready":"False"
	I0407 13:38:21.552936 1214786 pod_ready.go:103] pod "metrics-server-f79f97bbb-vntf4" in "kube-system" namespace has status "Ready":"False"
	I0407 13:38:23.556036 1214786 pod_ready.go:103] pod "metrics-server-f79f97bbb-vntf4" in "kube-system" namespace has status "Ready":"False"
	I0407 13:38:23.179699 1213906 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0407 13:38:23.195603 1213906 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0407 13:38:23.195701 1213906 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0407 13:38:23.247984 1213906 cri.go:89] found id: ""
	I0407 13:38:23.248021 1213906 logs.go:282] 0 containers: []
	W0407 13:38:23.248030 1213906 logs.go:284] No container was found matching "kube-apiserver"
	I0407 13:38:23.248037 1213906 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0407 13:38:23.248113 1213906 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0407 13:38:23.297330 1213906 cri.go:89] found id: ""
	I0407 13:38:23.297367 1213906 logs.go:282] 0 containers: []
	W0407 13:38:23.297380 1213906 logs.go:284] No container was found matching "etcd"
	I0407 13:38:23.297389 1213906 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0407 13:38:23.297465 1213906 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0407 13:38:23.342695 1213906 cri.go:89] found id: ""
	I0407 13:38:23.342732 1213906 logs.go:282] 0 containers: []
	W0407 13:38:23.342745 1213906 logs.go:284] No container was found matching "coredns"
	I0407 13:38:23.342754 1213906 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0407 13:38:23.342854 1213906 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0407 13:38:23.390557 1213906 cri.go:89] found id: ""
	I0407 13:38:23.390597 1213906 logs.go:282] 0 containers: []
	W0407 13:38:23.390610 1213906 logs.go:284] No container was found matching "kube-scheduler"
	I0407 13:38:23.390618 1213906 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0407 13:38:23.390693 1213906 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0407 13:38:23.436306 1213906 cri.go:89] found id: ""
	I0407 13:38:23.436431 1213906 logs.go:282] 0 containers: []
	W0407 13:38:23.436454 1213906 logs.go:284] No container was found matching "kube-proxy"
	I0407 13:38:23.436465 1213906 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0407 13:38:23.436544 1213906 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0407 13:38:23.489592 1213906 cri.go:89] found id: ""
	I0407 13:38:23.489635 1213906 logs.go:282] 0 containers: []
	W0407 13:38:23.489647 1213906 logs.go:284] No container was found matching "kube-controller-manager"
	I0407 13:38:23.489656 1213906 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0407 13:38:23.489757 1213906 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0407 13:38:23.549612 1213906 cri.go:89] found id: ""
	I0407 13:38:23.549665 1213906 logs.go:282] 0 containers: []
	W0407 13:38:23.549679 1213906 logs.go:284] No container was found matching "kindnet"
	I0407 13:38:23.549688 1213906 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0407 13:38:23.549803 1213906 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0407 13:38:23.593793 1213906 cri.go:89] found id: ""
	I0407 13:38:23.593834 1213906 logs.go:282] 0 containers: []
	W0407 13:38:23.593846 1213906 logs.go:284] No container was found matching "kubernetes-dashboard"
	I0407 13:38:23.593861 1213906 logs.go:123] Gathering logs for dmesg ...
	I0407 13:38:23.593882 1213906 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0407 13:38:23.613155 1213906 logs.go:123] Gathering logs for describe nodes ...
	I0407 13:38:23.613214 1213906 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0407 13:38:23.692080 1213906 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0407 13:38:23.692115 1213906 logs.go:123] Gathering logs for CRI-O ...
	I0407 13:38:23.692134 1213906 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0407 13:38:23.792659 1213906 logs.go:123] Gathering logs for container status ...
	I0407 13:38:23.792710 1213906 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0407 13:38:23.867830 1213906 logs.go:123] Gathering logs for kubelet ...
	I0407 13:38:23.867872 1213906 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
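	
	In the cycle above every crictl query for old-k8s-version-435730 returns an empty container list and the describe-nodes call is refused on localhost:8443, so the helper falls back to collecting the kubelet, dmesg, and CRI-O journals. A hedged sketch of the same checks run directly on that node, using the commands already shown in the log:
	
	    sudo crictl ps -a                       # expect no apiserver/etcd/scheduler containers, matching the log
	    sudo journalctl -u kubelet -n 400       # why the static pods never came up
	    sudo journalctl -u crio -n 400          # runtime-side errors, if any
	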
	I0407 13:38:26.435191 1213906 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0407 13:38:26.450136 1213906 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0407 13:38:26.450228 1213906 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0407 13:38:26.486457 1213906 cri.go:89] found id: ""
	I0407 13:38:26.486498 1213906 logs.go:282] 0 containers: []
	W0407 13:38:26.486510 1213906 logs.go:284] No container was found matching "kube-apiserver"
	I0407 13:38:26.486520 1213906 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0407 13:38:26.486605 1213906 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0407 13:38:26.523604 1213906 cri.go:89] found id: ""
	I0407 13:38:26.523642 1213906 logs.go:282] 0 containers: []
	W0407 13:38:26.523655 1213906 logs.go:284] No container was found matching "etcd"
	I0407 13:38:26.523663 1213906 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0407 13:38:26.523737 1213906 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0407 13:38:26.563215 1213906 cri.go:89] found id: ""
	I0407 13:38:26.563253 1213906 logs.go:282] 0 containers: []
	W0407 13:38:26.563276 1213906 logs.go:284] No container was found matching "coredns"
	I0407 13:38:26.563284 1213906 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0407 13:38:26.563353 1213906 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0407 13:38:26.597983 1213906 cri.go:89] found id: ""
	I0407 13:38:26.598018 1213906 logs.go:282] 0 containers: []
	W0407 13:38:26.598030 1213906 logs.go:284] No container was found matching "kube-scheduler"
	I0407 13:38:26.598038 1213906 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0407 13:38:26.598111 1213906 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0407 13:38:26.636270 1213906 cri.go:89] found id: ""
	I0407 13:38:26.636304 1213906 logs.go:282] 0 containers: []
	W0407 13:38:26.636313 1213906 logs.go:284] No container was found matching "kube-proxy"
	I0407 13:38:26.636323 1213906 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0407 13:38:26.636395 1213906 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0407 13:38:26.675668 1213906 cri.go:89] found id: ""
	I0407 13:38:26.675705 1213906 logs.go:282] 0 containers: []
	W0407 13:38:26.675717 1213906 logs.go:284] No container was found matching "kube-controller-manager"
	I0407 13:38:26.675731 1213906 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0407 13:38:26.675828 1213906 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0407 13:38:26.713079 1213906 cri.go:89] found id: ""
	I0407 13:38:26.713109 1213906 logs.go:282] 0 containers: []
	W0407 13:38:26.713119 1213906 logs.go:284] No container was found matching "kindnet"
	I0407 13:38:26.713126 1213906 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0407 13:38:26.713235 1213906 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0407 13:38:26.751390 1213906 cri.go:89] found id: ""
	I0407 13:38:26.751419 1213906 logs.go:282] 0 containers: []
	W0407 13:38:26.751434 1213906 logs.go:284] No container was found matching "kubernetes-dashboard"
	I0407 13:38:26.751445 1213906 logs.go:123] Gathering logs for container status ...
	I0407 13:38:26.751457 1213906 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0407 13:38:26.792848 1213906 logs.go:123] Gathering logs for kubelet ...
	I0407 13:38:26.792890 1213906 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0407 13:38:26.846159 1213906 logs.go:123] Gathering logs for dmesg ...
	I0407 13:38:26.846214 1213906 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0407 13:38:26.860024 1213906 logs.go:123] Gathering logs for describe nodes ...
	I0407 13:38:26.860061 1213906 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0407 13:38:26.935582 1213906 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0407 13:38:26.935610 1213906 logs.go:123] Gathering logs for CRI-O ...
	I0407 13:38:26.935624 1213906 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0407 13:38:22.746116 1215662 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20602-1162386/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0407 13:38:22.848444 1215662 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20602-1162386/.minikube/profiles/pause-111763/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1419 bytes)
	I0407 13:38:22.964506 1215662 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20602-1162386/.minikube/profiles/pause-111763/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0407 13:38:23.029616 1215662 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20602-1162386/.minikube/profiles/pause-111763/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0407 13:38:23.070863 1215662 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20602-1162386/.minikube/profiles/pause-111763/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0407 13:38:23.120929 1215662 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20602-1162386/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0407 13:38:23.166804 1215662 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20602-1162386/.minikube/certs/1169716.pem --> /usr/share/ca-certificates/1169716.pem (1338 bytes)
	I0407 13:38:23.217442 1215662 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20602-1162386/.minikube/files/etc/ssl/certs/11697162.pem --> /usr/share/ca-certificates/11697162.pem (1708 bytes)
	I0407 13:38:23.266209 1215662 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0407 13:38:23.303454 1215662 ssh_runner.go:195] Run: openssl version
	I0407 13:38:23.322699 1215662 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0407 13:38:23.348173 1215662 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0407 13:38:23.357410 1215662 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Apr  7 12:14 /usr/share/ca-certificates/minikubeCA.pem
	I0407 13:38:23.357495 1215662 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0407 13:38:23.378370 1215662 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0407 13:38:23.398674 1215662 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/1169716.pem && ln -fs /usr/share/ca-certificates/1169716.pem /etc/ssl/certs/1169716.pem"
	I0407 13:38:23.425964 1215662 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/1169716.pem
	I0407 13:38:23.439064 1215662 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Apr  7 12:22 /usr/share/ca-certificates/1169716.pem
	I0407 13:38:23.439154 1215662 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/1169716.pem
	I0407 13:38:23.470410 1215662 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/1169716.pem /etc/ssl/certs/51391683.0"
	I0407 13:38:23.496142 1215662 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/11697162.pem && ln -fs /usr/share/ca-certificates/11697162.pem /etc/ssl/certs/11697162.pem"
	I0407 13:38:23.520565 1215662 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/11697162.pem
	I0407 13:38:23.532477 1215662 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Apr  7 12:22 /usr/share/ca-certificates/11697162.pem
	I0407 13:38:23.532559 1215662 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/11697162.pem
	I0407 13:38:23.542502 1215662 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/11697162.pem /etc/ssl/certs/3ec20f2e.0"
	I0407 13:38:23.567739 1215662 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0407 13:38:23.575186 1215662 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0407 13:38:23.589372 1215662 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0407 13:38:23.599661 1215662 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0407 13:38:23.609524 1215662 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0407 13:38:23.619994 1215662 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0407 13:38:23.632393 1215662 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
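	
	Each `openssl x509 -checkend 86400` call above exits non-zero if the certificate expires within the next 86400 seconds (24 hours), which appears to be how minikube validates the existing control-plane certificates before reusing them. A small illustration of the pattern; the path is just one of the certificates listed above:
	
	    if openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400; then
	        echo "certificate is valid for at least another 24h"
	    else
	        echo "certificate expires within 24h (or could not be read)"
	    fi
	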
	I0407 13:38:23.639390 1215662 kubeadm.go:392] StartCluster: {Name:pause-111763 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.35.0-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.46-1743675393-20591@sha256:8de8167e280414c9d16e4c3da59bc85bc7c9cc24228af995863fc7bcabfcf727 Memory:2048 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.32.2 ClusterName:pause-111763 Namespa
ce:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.50.135 Port:8443 KubernetesVersion:v1.32.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-securit
y-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0407 13:38:23.639552 1215662 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0407 13:38:23.639620 1215662 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0407 13:38:23.726396 1215662 cri.go:89] found id: "5810b858b178854318d9f75335b6fbfe66d2877cd1c9a3b05feb45efc1f269bf"
	I0407 13:38:23.726432 1215662 cri.go:89] found id: "c07cdd2b6b317cb6a72baa68228fcebdf5c9ea28202d01df5b8f10d30cc41cc7"
	I0407 13:38:23.726438 1215662 cri.go:89] found id: "2fe876f74127fa8f6bde0d8368f91d5569eedbef6f1a21083f63b9b96193a6f5"
	I0407 13:38:23.726443 1215662 cri.go:89] found id: "7eed44592009903caf00bff1724f6970ed877b199d1d3d2f6be2073d585ba6bc"
	I0407 13:38:23.726447 1215662 cri.go:89] found id: "6f44aca5b3b4338727c1cd156ccabe024868350e7ba51a714a9fd79b952a60b4"
	I0407 13:38:23.726452 1215662 cri.go:89] found id: "bb00dc1832f25b934ca5d16864198cb029f701ab499845a4a2a2a29cd431b4e3"
	I0407 13:38:23.726456 1215662 cri.go:89] found id: "67fa1096fcd3ea1582575a7ad88facc5d40891b4ce4bdfc09ca8640a410969e3"
	I0407 13:38:23.726459 1215662 cri.go:89] found id: "e1252cc8e7eec024319de723d3f8a2557c52be660e46c095f9ea76af09c96331"
	I0407 13:38:23.726463 1215662 cri.go:89] found id: "180d98c8602766e97df035ab78eb3a4d3424f59553aee8a5d192706cd186af28"
	I0407 13:38:23.726471 1215662 cri.go:89] found id: "b9afe851415fcb11215dbfcb715d5b3820bac9d1a65806a07f17ff5785b040f1"
	I0407 13:38:23.726475 1215662 cri.go:89] found id: "b2a1ba832cc573354c0a61f3a3bc63b52e50468adf55fd01288a78d9e6b3f04c"
	I0407 13:38:23.726478 1215662 cri.go:89] found id: "3f15fd98836e3a799770112d2e9b005ea681584c46ad08db234624b9233805b7"
	I0407 13:38:23.726483 1215662 cri.go:89] found id: ""
	I0407 13:38:23.726547 1215662 ssh_runner.go:195] Run: sudo runc list -f json

                                                
                                                
-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p pause-111763 -n pause-111763
helpers_test.go:261: (dbg) Run:  kubectl --context pause-111763 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:285: <<< TestPause/serial/SecondStartNoReconfiguration FAILED: end of post-mortem logs <<<
helpers_test.go:286: ---------------------/post-mortem---------------------------------
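
On failure the harness repeats the post-mortem collection: host and apiserver status, any pods not in the Running phase, and the last 25 minikube log lines. The same data can be gathered by hand with the commands the harness itself invokes above and below:

    out/minikube-linux-amd64 status --format={{.Host}} -p pause-111763 -n pause-111763
    out/minikube-linux-amd64 -p pause-111763 logs -n 25
    kubectl --context pause-111763 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
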
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p pause-111763 -n pause-111763
helpers_test.go:244: <<< TestPause/serial/SecondStartNoReconfiguration FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestPause/serial/SecondStartNoReconfiguration]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p pause-111763 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-amd64 -p pause-111763 logs -n 25: (1.627993995s)
helpers_test.go:252: TestPause/serial/SecondStartNoReconfiguration logs: 
-- stdout --
	
	==> Audit <==
	|---------|--------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
	| Command |                          Args                          |           Profile            |  User   | Version |     Start Time      |      End Time       |
	|---------|--------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
	| delete  | -p NoKubernetes-027070                                 | NoKubernetes-027070          | jenkins | v1.35.0 | 07 Apr 25 13:32 UTC | 07 Apr 25 13:32 UTC |
	| start   | -p cert-options-919040                                 | cert-options-919040          | jenkins | v1.35.0 | 07 Apr 25 13:32 UTC | 07 Apr 25 13:33 UTC |
	|         | --memory=2048                                          |                              |         |         |                     |                     |
	|         | --apiserver-ips=127.0.0.1                              |                              |         |         |                     |                     |
	|         | --apiserver-ips=192.168.15.15                          |                              |         |         |                     |                     |
	|         | --apiserver-names=localhost                            |                              |         |         |                     |                     |
	|         | --apiserver-names=www.google.com                       |                              |         |         |                     |                     |
	|         | --apiserver-port=8555                                  |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	| delete  | -p running-upgrade-046238                              | running-upgrade-046238       | jenkins | v1.35.0 | 07 Apr 25 13:32 UTC | 07 Apr 25 13:32 UTC |
	| start   | -p no-preload-028452                                   | no-preload-028452            | jenkins | v1.35.0 | 07 Apr 25 13:32 UTC | 07 Apr 25 13:34 UTC |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --preload=false --driver=kvm2                          |                              |         |         |                     |                     |
	|         |  --container-runtime=crio                              |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.32.2                           |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p old-k8s-version-435730        | old-k8s-version-435730       | jenkins | v1.35.0 | 07 Apr 25 13:33 UTC |                     |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| ssh     | cert-options-919040 ssh                                | cert-options-919040          | jenkins | v1.35.0 | 07 Apr 25 13:33 UTC | 07 Apr 25 13:33 UTC |
	|         | openssl x509 -text -noout -in                          |                              |         |         |                     |                     |
	|         | /var/lib/minikube/certs/apiserver.crt                  |                              |         |         |                     |                     |
	| ssh     | -p cert-options-919040 -- sudo                         | cert-options-919040          | jenkins | v1.35.0 | 07 Apr 25 13:33 UTC | 07 Apr 25 13:33 UTC |
	|         | cat /etc/kubernetes/admin.conf                         |                              |         |         |                     |                     |
	| delete  | -p cert-options-919040                                 | cert-options-919040          | jenkins | v1.35.0 | 07 Apr 25 13:33 UTC | 07 Apr 25 13:33 UTC |
	| start   | -p embed-certs-931633                                  | embed-certs-931633           | jenkins | v1.35.0 | 07 Apr 25 13:33 UTC | 07 Apr 25 13:34 UTC |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --embed-certs --driver=kvm2                            |                              |         |         |                     |                     |
	|         |  --container-runtime=crio                              |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.32.2                           |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p no-preload-028452             | no-preload-028452            | jenkins | v1.35.0 | 07 Apr 25 13:34 UTC | 07 Apr 25 13:34 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| stop    | -p no-preload-028452                                   | no-preload-028452            | jenkins | v1.35.0 | 07 Apr 25 13:34 UTC | 07 Apr 25 13:36 UTC |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| stop    | -p old-k8s-version-435730                              | old-k8s-version-435730       | jenkins | v1.35.0 | 07 Apr 25 13:34 UTC | 07 Apr 25 13:35 UTC |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable dashboard -p old-k8s-version-435730             | old-k8s-version-435730       | jenkins | v1.35.0 | 07 Apr 25 13:35 UTC | 07 Apr 25 13:35 UTC |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p old-k8s-version-435730                              | old-k8s-version-435730       | jenkins | v1.35.0 | 07 Apr 25 13:35 UTC |                     |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --kvm-network=default                                  |                              |         |         |                     |                     |
	|         | --kvm-qemu-uri=qemu:///system                          |                              |         |         |                     |                     |
	|         | --disable-driver-mounts                                |                              |         |         |                     |                     |
	|         | --keep-context=false                                   |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.20.0                           |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p embed-certs-931633            | embed-certs-931633           | jenkins | v1.35.0 | 07 Apr 25 13:35 UTC | 07 Apr 25 13:35 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| stop    | -p embed-certs-931633                                  | embed-certs-931633           | jenkins | v1.35.0 | 07 Apr 25 13:35 UTC | 07 Apr 25 13:36 UTC |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| start   | -p cert-expiration-950320                              | cert-expiration-950320       | jenkins | v1.35.0 | 07 Apr 25 13:36 UTC | 07 Apr 25 13:36 UTC |
	|         | --memory=2048                                          |                              |         |         |                     |                     |
	|         | --cert-expiration=8760h                                |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	| addons  | enable dashboard -p no-preload-028452                  | no-preload-028452            | jenkins | v1.35.0 | 07 Apr 25 13:36 UTC | 07 Apr 25 13:36 UTC |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p no-preload-028452                                   | no-preload-028452            | jenkins | v1.35.0 | 07 Apr 25 13:36 UTC |                     |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --preload=false --driver=kvm2                          |                              |         |         |                     |                     |
	|         |  --container-runtime=crio                              |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.32.2                           |                              |         |         |                     |                     |
	| addons  | enable dashboard -p embed-certs-931633                 | embed-certs-931633           | jenkins | v1.35.0 | 07 Apr 25 13:36 UTC | 07 Apr 25 13:36 UTC |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p embed-certs-931633                                  | embed-certs-931633           | jenkins | v1.35.0 | 07 Apr 25 13:36 UTC |                     |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --embed-certs --driver=kvm2                            |                              |         |         |                     |                     |
	|         |  --container-runtime=crio                              |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.32.2                           |                              |         |         |                     |                     |
	| delete  | -p cert-expiration-950320                              | cert-expiration-950320       | jenkins | v1.35.0 | 07 Apr 25 13:36 UTC | 07 Apr 25 13:36 UTC |
	| delete  | -p                                                     | disable-driver-mounts-696615 | jenkins | v1.35.0 | 07 Apr 25 13:36 UTC | 07 Apr 25 13:36 UTC |
	|         | disable-driver-mounts-696615                           |                              |         |         |                     |                     |
	| start   | -p pause-111763 --memory=2048                          | pause-111763                 | jenkins | v1.35.0 | 07 Apr 25 13:36 UTC | 07 Apr 25 13:38 UTC |
	|         | --install-addons=false                                 |                              |         |         |                     |                     |
	|         | --wait=all --driver=kvm2                               |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	| start   | -p pause-111763                                        | pause-111763                 | jenkins | v1.35.0 | 07 Apr 25 13:38 UTC | 07 Apr 25 13:39 UTC |
	|         | --alsologtostderr                                      |                              |         |         |                     |                     |
	|         | -v=1 --driver=kvm2                                     |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	|---------|--------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2025/04/07 13:38:12
	Running on machine: ubuntu-20-agent-2
	Binary: Built with gc go1.24.0 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0407 13:38:12.710152 1215662 out.go:345] Setting OutFile to fd 1 ...
	I0407 13:38:12.710725 1215662 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0407 13:38:12.710740 1215662 out.go:358] Setting ErrFile to fd 2...
	I0407 13:38:12.710746 1215662 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0407 13:38:12.711005 1215662 root.go:338] Updating PATH: /home/jenkins/minikube-integration/20602-1162386/.minikube/bin
	I0407 13:38:12.711660 1215662 out.go:352] Setting JSON to false
	I0407 13:38:12.712905 1215662 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-2","uptime":19237,"bootTime":1744013856,"procs":216,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1078-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0407 13:38:12.713058 1215662 start.go:139] virtualization: kvm guest
	I0407 13:38:12.715805 1215662 out.go:177] * [pause-111763] minikube v1.35.0 on Ubuntu 20.04 (kvm/amd64)
	I0407 13:38:12.718419 1215662 out.go:177]   - MINIKUBE_LOCATION=20602
	I0407 13:38:12.718416 1215662 notify.go:220] Checking for updates...
	I0407 13:38:12.720692 1215662 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0407 13:38:12.722446 1215662 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/20602-1162386/kubeconfig
	I0407 13:38:12.724231 1215662 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/20602-1162386/.minikube
	I0407 13:38:12.726029 1215662 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0407 13:38:12.728023 1215662 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0407 13:38:12.730279 1215662 config.go:182] Loaded profile config "pause-111763": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.32.2
	I0407 13:38:12.730845 1215662 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0407 13:38:12.730963 1215662 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0407 13:38:12.749257 1215662 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34461
	I0407 13:38:12.749865 1215662 main.go:141] libmachine: () Calling .GetVersion
	I0407 13:38:12.750543 1215662 main.go:141] libmachine: Using API Version  1
	I0407 13:38:12.750580 1215662 main.go:141] libmachine: () Calling .SetConfigRaw
	I0407 13:38:12.751100 1215662 main.go:141] libmachine: () Calling .GetMachineName
	I0407 13:38:12.751351 1215662 main.go:141] libmachine: (pause-111763) Calling .DriverName
	I0407 13:38:12.751650 1215662 driver.go:394] Setting default libvirt URI to qemu:///system
	I0407 13:38:12.752062 1215662 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0407 13:38:12.752116 1215662 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0407 13:38:12.770118 1215662 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33017
	I0407 13:38:12.770628 1215662 main.go:141] libmachine: () Calling .GetVersion
	I0407 13:38:12.771181 1215662 main.go:141] libmachine: Using API Version  1
	I0407 13:38:12.771210 1215662 main.go:141] libmachine: () Calling .SetConfigRaw
	I0407 13:38:12.771638 1215662 main.go:141] libmachine: () Calling .GetMachineName
	I0407 13:38:12.771850 1215662 main.go:141] libmachine: (pause-111763) Calling .DriverName
	I0407 13:38:12.815130 1215662 out.go:177] * Using the kvm2 driver based on existing profile
	I0407 13:38:12.816745 1215662 start.go:297] selected driver: kvm2
	I0407 13:38:12.816770 1215662 start.go:901] validating driver "kvm2" against &{Name:pause-111763 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.35.0-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.46-1743675393-20591@sha256:8de8167e280414c9d16e4c3da59bc85bc7c9cc24228af995863fc7bcabfcf727 Memory:2048 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.32.2 ClusterName:pa
use-111763 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.50.135 Port:8443 KubernetesVersion:v1.32.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm
:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0407 13:38:12.816913 1215662 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0407 13:38:12.817352 1215662 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0407 13:38:12.817464 1215662 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/20602-1162386/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0407 13:38:12.835337 1215662 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.35.0
	I0407 13:38:12.836313 1215662 cni.go:84] Creating CNI manager for ""
	I0407 13:38:12.836389 1215662 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0407 13:38:12.836477 1215662 start.go:340] cluster config:
	{Name:pause-111763 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.35.0-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.46-1743675393-20591@sha256:8de8167e280414c9d16e4c3da59bc85bc7c9cc24228af995863fc7bcabfcf727 Memory:2048 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.32.2 ClusterName:pause-111763 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServe
rNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.50.135 Port:8443 KubernetesVersion:v1.32.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-alias
es:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0407 13:38:12.836671 1215662 iso.go:125] acquiring lock: {Name:mk51e1827709f7a3810dbd898083f8185ece65eb Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0407 13:38:12.839010 1215662 out.go:177] * Starting "pause-111763" primary control-plane node in "pause-111763" cluster
	I0407 13:38:12.840629 1215662 preload.go:131] Checking if preload exists for k8s version v1.32.2 and runtime crio
	I0407 13:38:12.840698 1215662 preload.go:146] Found local preload: /home/jenkins/minikube-integration/20602-1162386/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.32.2-cri-o-overlay-amd64.tar.lz4
	I0407 13:38:12.840712 1215662 cache.go:56] Caching tarball of preloaded images
	I0407 13:38:12.840821 1215662 preload.go:172] Found /home/jenkins/minikube-integration/20602-1162386/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.32.2-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I0407 13:38:12.840839 1215662 cache.go:59] Finished verifying existence of preloaded tar for v1.32.2 on crio
	I0407 13:38:12.841035 1215662 profile.go:143] Saving config to /home/jenkins/minikube-integration/20602-1162386/.minikube/profiles/pause-111763/config.json ...
	I0407 13:38:12.841289 1215662 start.go:360] acquireMachinesLock for pause-111763: {Name:mk51d4c744fa92d56cf6ac11b1e792c85ef6709a Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0407 13:38:12.841342 1215662 start.go:364] duration metric: took 30.538µs to acquireMachinesLock for "pause-111763"
	I0407 13:38:12.841364 1215662 start.go:96] Skipping create...Using existing machine configuration
	I0407 13:38:12.841373 1215662 fix.go:54] fixHost starting: 
	I0407 13:38:12.841694 1215662 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0407 13:38:12.841751 1215662 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0407 13:38:12.858568 1215662 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46103
	I0407 13:38:12.859080 1215662 main.go:141] libmachine: () Calling .GetVersion
	I0407 13:38:12.859632 1215662 main.go:141] libmachine: Using API Version  1
	I0407 13:38:12.859650 1215662 main.go:141] libmachine: () Calling .SetConfigRaw
	I0407 13:38:12.860076 1215662 main.go:141] libmachine: () Calling .GetMachineName
	I0407 13:38:12.860356 1215662 main.go:141] libmachine: (pause-111763) Calling .DriverName
	I0407 13:38:12.860590 1215662 main.go:141] libmachine: (pause-111763) Calling .GetState
	I0407 13:38:12.862792 1215662 fix.go:112] recreateIfNeeded on pause-111763: state=Running err=<nil>
	W0407 13:38:12.862823 1215662 fix.go:138] unexpected machine state, will restart: <nil>
	I0407 13:38:12.865411 1215662 out.go:177] * Updating the running kvm2 "pause-111763" VM ...
	I0407 13:38:11.380309 1214596 pod_ready.go:103] pod "metrics-server-f79f97bbb-nwxq2" in "kube-system" namespace has status "Ready":"False"
	I0407 13:38:13.877384 1214596 pod_ready.go:103] pod "metrics-server-f79f97bbb-nwxq2" in "kube-system" namespace has status "Ready":"False"
	I0407 13:38:12.553096 1214786 pod_ready.go:103] pod "metrics-server-f79f97bbb-vntf4" in "kube-system" namespace has status "Ready":"False"
	I0407 13:38:15.052537 1214786 pod_ready.go:103] pod "metrics-server-f79f97bbb-vntf4" in "kube-system" namespace has status "Ready":"False"
	I0407 13:38:13.804758 1213906 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0407 13:38:13.818792 1213906 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0407 13:38:13.818873 1213906 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0407 13:38:13.855066 1213906 cri.go:89] found id: ""
	I0407 13:38:13.855101 1213906 logs.go:282] 0 containers: []
	W0407 13:38:13.855111 1213906 logs.go:284] No container was found matching "kube-apiserver"
	I0407 13:38:13.855118 1213906 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0407 13:38:13.855177 1213906 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0407 13:38:13.892476 1213906 cri.go:89] found id: ""
	I0407 13:38:13.892508 1213906 logs.go:282] 0 containers: []
	W0407 13:38:13.892519 1213906 logs.go:284] No container was found matching "etcd"
	I0407 13:38:13.892527 1213906 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0407 13:38:13.892595 1213906 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0407 13:38:13.927175 1213906 cri.go:89] found id: ""
	I0407 13:38:13.927208 1213906 logs.go:282] 0 containers: []
	W0407 13:38:13.927217 1213906 logs.go:284] No container was found matching "coredns"
	I0407 13:38:13.927224 1213906 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0407 13:38:13.927312 1213906 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0407 13:38:13.971556 1213906 cri.go:89] found id: ""
	I0407 13:38:13.971581 1213906 logs.go:282] 0 containers: []
	W0407 13:38:13.971591 1213906 logs.go:284] No container was found matching "kube-scheduler"
	I0407 13:38:13.971599 1213906 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0407 13:38:13.971662 1213906 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0407 13:38:14.011793 1213906 cri.go:89] found id: ""
	I0407 13:38:14.011824 1213906 logs.go:282] 0 containers: []
	W0407 13:38:14.011835 1213906 logs.go:284] No container was found matching "kube-proxy"
	I0407 13:38:14.011843 1213906 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0407 13:38:14.011925 1213906 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0407 13:38:14.050493 1213906 cri.go:89] found id: ""
	I0407 13:38:14.050527 1213906 logs.go:282] 0 containers: []
	W0407 13:38:14.050538 1213906 logs.go:284] No container was found matching "kube-controller-manager"
	I0407 13:38:14.050547 1213906 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0407 13:38:14.050617 1213906 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0407 13:38:14.085673 1213906 cri.go:89] found id: ""
	I0407 13:38:14.085724 1213906 logs.go:282] 0 containers: []
	W0407 13:38:14.085737 1213906 logs.go:284] No container was found matching "kindnet"
	I0407 13:38:14.085746 1213906 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0407 13:38:14.085812 1213906 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0407 13:38:14.131856 1213906 cri.go:89] found id: ""
	I0407 13:38:14.131893 1213906 logs.go:282] 0 containers: []
	W0407 13:38:14.131906 1213906 logs.go:284] No container was found matching "kubernetes-dashboard"
	I0407 13:38:14.131920 1213906 logs.go:123] Gathering logs for kubelet ...
	I0407 13:38:14.131937 1213906 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0407 13:38:14.185085 1213906 logs.go:123] Gathering logs for dmesg ...
	I0407 13:38:14.185138 1213906 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0407 13:38:14.199586 1213906 logs.go:123] Gathering logs for describe nodes ...
	I0407 13:38:14.199625 1213906 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0407 13:38:14.277571 1213906 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0407 13:38:14.277604 1213906 logs.go:123] Gathering logs for CRI-O ...
	I0407 13:38:14.277624 1213906 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0407 13:38:14.353802 1213906 logs.go:123] Gathering logs for container status ...
	I0407 13:38:14.353859 1213906 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0407 13:38:16.895403 1213906 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0407 13:38:16.909675 1213906 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0407 13:38:16.909846 1213906 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0407 13:38:16.945406 1213906 cri.go:89] found id: ""
	I0407 13:38:16.945455 1213906 logs.go:282] 0 containers: []
	W0407 13:38:16.945484 1213906 logs.go:284] No container was found matching "kube-apiserver"
	I0407 13:38:16.945494 1213906 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0407 13:38:16.945574 1213906 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0407 13:38:16.983588 1213906 cri.go:89] found id: ""
	I0407 13:38:16.983626 1213906 logs.go:282] 0 containers: []
	W0407 13:38:16.983638 1213906 logs.go:284] No container was found matching "etcd"
	I0407 13:38:16.983647 1213906 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0407 13:38:16.983717 1213906 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0407 13:38:17.020444 1213906 cri.go:89] found id: ""
	I0407 13:38:17.020487 1213906 logs.go:282] 0 containers: []
	W0407 13:38:17.020501 1213906 logs.go:284] No container was found matching "coredns"
	I0407 13:38:17.020510 1213906 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0407 13:38:17.020593 1213906 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0407 13:38:17.060614 1213906 cri.go:89] found id: ""
	I0407 13:38:17.060657 1213906 logs.go:282] 0 containers: []
	W0407 13:38:17.060669 1213906 logs.go:284] No container was found matching "kube-scheduler"
	I0407 13:38:17.060678 1213906 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0407 13:38:17.060762 1213906 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0407 13:38:17.105096 1213906 cri.go:89] found id: ""
	I0407 13:38:17.105136 1213906 logs.go:282] 0 containers: []
	W0407 13:38:17.105148 1213906 logs.go:284] No container was found matching "kube-proxy"
	I0407 13:38:17.105156 1213906 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0407 13:38:17.105237 1213906 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0407 13:38:17.144101 1213906 cri.go:89] found id: ""
	I0407 13:38:17.144140 1213906 logs.go:282] 0 containers: []
	W0407 13:38:17.144156 1213906 logs.go:284] No container was found matching "kube-controller-manager"
	I0407 13:38:17.144166 1213906 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0407 13:38:17.144242 1213906 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0407 13:38:17.190569 1213906 cri.go:89] found id: ""
	I0407 13:38:17.190602 1213906 logs.go:282] 0 containers: []
	W0407 13:38:17.190613 1213906 logs.go:284] No container was found matching "kindnet"
	I0407 13:38:17.190621 1213906 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0407 13:38:17.190693 1213906 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0407 13:38:17.233997 1213906 cri.go:89] found id: ""
	I0407 13:38:17.234030 1213906 logs.go:282] 0 containers: []
	W0407 13:38:17.234039 1213906 logs.go:284] No container was found matching "kubernetes-dashboard"
	I0407 13:38:17.234051 1213906 logs.go:123] Gathering logs for CRI-O ...
	I0407 13:38:17.234065 1213906 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0407 13:38:17.321443 1213906 logs.go:123] Gathering logs for container status ...
	I0407 13:38:17.321495 1213906 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0407 13:38:17.370755 1213906 logs.go:123] Gathering logs for kubelet ...
	I0407 13:38:17.370794 1213906 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0407 13:38:17.429210 1213906 logs.go:123] Gathering logs for dmesg ...
	I0407 13:38:17.429268 1213906 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0407 13:38:17.444684 1213906 logs.go:123] Gathering logs for describe nodes ...
	I0407 13:38:17.444722 1213906 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0407 13:38:17.522630 1213906 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0407 13:38:12.867268 1215662 machine.go:93] provisionDockerMachine start ...
	I0407 13:38:12.867310 1215662 main.go:141] libmachine: (pause-111763) Calling .DriverName
	I0407 13:38:12.867709 1215662 main.go:141] libmachine: (pause-111763) Calling .GetSSHHostname
	I0407 13:38:12.872753 1215662 main.go:141] libmachine: (pause-111763) DBG | domain pause-111763 has defined MAC address 52:54:00:6e:6a:43 in network mk-pause-111763
	I0407 13:38:12.873448 1215662 main.go:141] libmachine: (pause-111763) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:6e:6a:43", ip: ""} in network mk-pause-111763: {Iface:virbr2 ExpiryTime:2025-04-07 14:37:26 +0000 UTC Type:0 Mac:52:54:00:6e:6a:43 Iaid: IPaddr:192.168.50.135 Prefix:24 Hostname:pause-111763 Clientid:01:52:54:00:6e:6a:43}
	I0407 13:38:12.873491 1215662 main.go:141] libmachine: (pause-111763) DBG | domain pause-111763 has defined IP address 192.168.50.135 and MAC address 52:54:00:6e:6a:43 in network mk-pause-111763
	I0407 13:38:12.873757 1215662 main.go:141] libmachine: (pause-111763) Calling .GetSSHPort
	I0407 13:38:12.873993 1215662 main.go:141] libmachine: (pause-111763) Calling .GetSSHKeyPath
	I0407 13:38:12.874207 1215662 main.go:141] libmachine: (pause-111763) Calling .GetSSHKeyPath
	I0407 13:38:12.874508 1215662 main.go:141] libmachine: (pause-111763) Calling .GetSSHUsername
	I0407 13:38:12.874871 1215662 main.go:141] libmachine: Using SSH client type: native
	I0407 13:38:12.875261 1215662 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x836360] 0x839060 <nil>  [] 0s} 192.168.50.135 22 <nil> <nil>}
	I0407 13:38:12.875287 1215662 main.go:141] libmachine: About to run SSH command:
	hostname
	I0407 13:38:12.978730 1215662 main.go:141] libmachine: SSH cmd err, output: <nil>: pause-111763
	
	I0407 13:38:12.978768 1215662 main.go:141] libmachine: (pause-111763) Calling .GetMachineName
	I0407 13:38:12.979116 1215662 buildroot.go:166] provisioning hostname "pause-111763"
	I0407 13:38:12.979144 1215662 main.go:141] libmachine: (pause-111763) Calling .GetMachineName
	I0407 13:38:12.979355 1215662 main.go:141] libmachine: (pause-111763) Calling .GetSSHHostname
	I0407 13:38:12.982540 1215662 main.go:141] libmachine: (pause-111763) DBG | domain pause-111763 has defined MAC address 52:54:00:6e:6a:43 in network mk-pause-111763
	I0407 13:38:12.982907 1215662 main.go:141] libmachine: (pause-111763) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:6e:6a:43", ip: ""} in network mk-pause-111763: {Iface:virbr2 ExpiryTime:2025-04-07 14:37:26 +0000 UTC Type:0 Mac:52:54:00:6e:6a:43 Iaid: IPaddr:192.168.50.135 Prefix:24 Hostname:pause-111763 Clientid:01:52:54:00:6e:6a:43}
	I0407 13:38:12.982932 1215662 main.go:141] libmachine: (pause-111763) DBG | domain pause-111763 has defined IP address 192.168.50.135 and MAC address 52:54:00:6e:6a:43 in network mk-pause-111763
	I0407 13:38:12.983051 1215662 main.go:141] libmachine: (pause-111763) Calling .GetSSHPort
	I0407 13:38:12.983270 1215662 main.go:141] libmachine: (pause-111763) Calling .GetSSHKeyPath
	I0407 13:38:12.983499 1215662 main.go:141] libmachine: (pause-111763) Calling .GetSSHKeyPath
	I0407 13:38:12.983639 1215662 main.go:141] libmachine: (pause-111763) Calling .GetSSHUsername
	I0407 13:38:12.983819 1215662 main.go:141] libmachine: Using SSH client type: native
	I0407 13:38:12.984097 1215662 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x836360] 0x839060 <nil>  [] 0s} 192.168.50.135 22 <nil> <nil>}
	I0407 13:38:12.984122 1215662 main.go:141] libmachine: About to run SSH command:
	sudo hostname pause-111763 && echo "pause-111763" | sudo tee /etc/hostname
	I0407 13:38:13.100478 1215662 main.go:141] libmachine: SSH cmd err, output: <nil>: pause-111763
	
	I0407 13:38:13.100506 1215662 main.go:141] libmachine: (pause-111763) Calling .GetSSHHostname
	I0407 13:38:13.103717 1215662 main.go:141] libmachine: (pause-111763) DBG | domain pause-111763 has defined MAC address 52:54:00:6e:6a:43 in network mk-pause-111763
	I0407 13:38:13.104189 1215662 main.go:141] libmachine: (pause-111763) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:6e:6a:43", ip: ""} in network mk-pause-111763: {Iface:virbr2 ExpiryTime:2025-04-07 14:37:26 +0000 UTC Type:0 Mac:52:54:00:6e:6a:43 Iaid: IPaddr:192.168.50.135 Prefix:24 Hostname:pause-111763 Clientid:01:52:54:00:6e:6a:43}
	I0407 13:38:13.104228 1215662 main.go:141] libmachine: (pause-111763) DBG | domain pause-111763 has defined IP address 192.168.50.135 and MAC address 52:54:00:6e:6a:43 in network mk-pause-111763
	I0407 13:38:13.104473 1215662 main.go:141] libmachine: (pause-111763) Calling .GetSSHPort
	I0407 13:38:13.104721 1215662 main.go:141] libmachine: (pause-111763) Calling .GetSSHKeyPath
	I0407 13:38:13.104978 1215662 main.go:141] libmachine: (pause-111763) Calling .GetSSHKeyPath
	I0407 13:38:13.105172 1215662 main.go:141] libmachine: (pause-111763) Calling .GetSSHUsername
	I0407 13:38:13.105441 1215662 main.go:141] libmachine: Using SSH client type: native
	I0407 13:38:13.105661 1215662 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x836360] 0x839060 <nil>  [] 0s} 192.168.50.135 22 <nil> <nil>}
	I0407 13:38:13.105679 1215662 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\spause-111763' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 pause-111763/g' /etc/hosts;
				else 
					echo '127.0.1.1 pause-111763' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0407 13:38:13.215373 1215662 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0407 13:38:13.215409 1215662 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/20602-1162386/.minikube CaCertPath:/home/jenkins/minikube-integration/20602-1162386/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/20602-1162386/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/20602-1162386/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/20602-1162386/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/20602-1162386/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/20602-1162386/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/20602-1162386/.minikube}
	I0407 13:38:13.215440 1215662 buildroot.go:174] setting up certificates
	I0407 13:38:13.215455 1215662 provision.go:84] configureAuth start
	I0407 13:38:13.215466 1215662 main.go:141] libmachine: (pause-111763) Calling .GetMachineName
	I0407 13:38:13.215858 1215662 main.go:141] libmachine: (pause-111763) Calling .GetIP
	I0407 13:38:13.219032 1215662 main.go:141] libmachine: (pause-111763) DBG | domain pause-111763 has defined MAC address 52:54:00:6e:6a:43 in network mk-pause-111763
	I0407 13:38:13.219477 1215662 main.go:141] libmachine: (pause-111763) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:6e:6a:43", ip: ""} in network mk-pause-111763: {Iface:virbr2 ExpiryTime:2025-04-07 14:37:26 +0000 UTC Type:0 Mac:52:54:00:6e:6a:43 Iaid: IPaddr:192.168.50.135 Prefix:24 Hostname:pause-111763 Clientid:01:52:54:00:6e:6a:43}
	I0407 13:38:13.219510 1215662 main.go:141] libmachine: (pause-111763) DBG | domain pause-111763 has defined IP address 192.168.50.135 and MAC address 52:54:00:6e:6a:43 in network mk-pause-111763
	I0407 13:38:13.219671 1215662 main.go:141] libmachine: (pause-111763) Calling .GetSSHHostname
	I0407 13:38:13.222926 1215662 main.go:141] libmachine: (pause-111763) DBG | domain pause-111763 has defined MAC address 52:54:00:6e:6a:43 in network mk-pause-111763
	I0407 13:38:13.223484 1215662 main.go:141] libmachine: (pause-111763) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:6e:6a:43", ip: ""} in network mk-pause-111763: {Iface:virbr2 ExpiryTime:2025-04-07 14:37:26 +0000 UTC Type:0 Mac:52:54:00:6e:6a:43 Iaid: IPaddr:192.168.50.135 Prefix:24 Hostname:pause-111763 Clientid:01:52:54:00:6e:6a:43}
	I0407 13:38:13.223519 1215662 main.go:141] libmachine: (pause-111763) DBG | domain pause-111763 has defined IP address 192.168.50.135 and MAC address 52:54:00:6e:6a:43 in network mk-pause-111763
	I0407 13:38:13.223722 1215662 provision.go:143] copyHostCerts
	I0407 13:38:13.223788 1215662 exec_runner.go:144] found /home/jenkins/minikube-integration/20602-1162386/.minikube/cert.pem, removing ...
	I0407 13:38:13.223809 1215662 exec_runner.go:203] rm: /home/jenkins/minikube-integration/20602-1162386/.minikube/cert.pem
	I0407 13:38:13.223882 1215662 exec_runner.go:151] cp: /home/jenkins/minikube-integration/20602-1162386/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/20602-1162386/.minikube/cert.pem (1123 bytes)
	I0407 13:38:13.223977 1215662 exec_runner.go:144] found /home/jenkins/minikube-integration/20602-1162386/.minikube/key.pem, removing ...
	I0407 13:38:13.223985 1215662 exec_runner.go:203] rm: /home/jenkins/minikube-integration/20602-1162386/.minikube/key.pem
	I0407 13:38:13.224012 1215662 exec_runner.go:151] cp: /home/jenkins/minikube-integration/20602-1162386/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/20602-1162386/.minikube/key.pem (1675 bytes)
	I0407 13:38:13.224069 1215662 exec_runner.go:144] found /home/jenkins/minikube-integration/20602-1162386/.minikube/ca.pem, removing ...
	I0407 13:38:13.224077 1215662 exec_runner.go:203] rm: /home/jenkins/minikube-integration/20602-1162386/.minikube/ca.pem
	I0407 13:38:13.224098 1215662 exec_runner.go:151] cp: /home/jenkins/minikube-integration/20602-1162386/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/20602-1162386/.minikube/ca.pem (1078 bytes)
	I0407 13:38:13.224146 1215662 provision.go:117] generating server cert: /home/jenkins/minikube-integration/20602-1162386/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/20602-1162386/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/20602-1162386/.minikube/certs/ca-key.pem org=jenkins.pause-111763 san=[127.0.0.1 192.168.50.135 localhost minikube pause-111763]
	I0407 13:38:13.783449 1215662 provision.go:177] copyRemoteCerts
	I0407 13:38:13.783532 1215662 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0407 13:38:13.783560 1215662 main.go:141] libmachine: (pause-111763) Calling .GetSSHHostname
	I0407 13:38:13.786663 1215662 main.go:141] libmachine: (pause-111763) DBG | domain pause-111763 has defined MAC address 52:54:00:6e:6a:43 in network mk-pause-111763
	I0407 13:38:13.786980 1215662 main.go:141] libmachine: (pause-111763) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:6e:6a:43", ip: ""} in network mk-pause-111763: {Iface:virbr2 ExpiryTime:2025-04-07 14:37:26 +0000 UTC Type:0 Mac:52:54:00:6e:6a:43 Iaid: IPaddr:192.168.50.135 Prefix:24 Hostname:pause-111763 Clientid:01:52:54:00:6e:6a:43}
	I0407 13:38:13.787006 1215662 main.go:141] libmachine: (pause-111763) DBG | domain pause-111763 has defined IP address 192.168.50.135 and MAC address 52:54:00:6e:6a:43 in network mk-pause-111763
	I0407 13:38:13.787249 1215662 main.go:141] libmachine: (pause-111763) Calling .GetSSHPort
	I0407 13:38:13.787577 1215662 main.go:141] libmachine: (pause-111763) Calling .GetSSHKeyPath
	I0407 13:38:13.787757 1215662 main.go:141] libmachine: (pause-111763) Calling .GetSSHUsername
	I0407 13:38:13.787936 1215662 sshutil.go:53] new ssh client: &{IP:192.168.50.135 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20602-1162386/.minikube/machines/pause-111763/id_rsa Username:docker}
	I0407 13:38:13.871639 1215662 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20602-1162386/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0407 13:38:13.901982 1215662 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20602-1162386/.minikube/machines/server.pem --> /etc/docker/server.pem (1208 bytes)
	I0407 13:38:13.936509 1215662 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20602-1162386/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0407 13:38:13.968492 1215662 provision.go:87] duration metric: took 753.020928ms to configureAuth
	I0407 13:38:13.968526 1215662 buildroot.go:189] setting minikube options for container-runtime
	I0407 13:38:13.968780 1215662 config.go:182] Loaded profile config "pause-111763": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.32.2
	I0407 13:38:13.968864 1215662 main.go:141] libmachine: (pause-111763) Calling .GetSSHHostname
	I0407 13:38:13.971825 1215662 main.go:141] libmachine: (pause-111763) DBG | domain pause-111763 has defined MAC address 52:54:00:6e:6a:43 in network mk-pause-111763
	I0407 13:38:13.972234 1215662 main.go:141] libmachine: (pause-111763) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:6e:6a:43", ip: ""} in network mk-pause-111763: {Iface:virbr2 ExpiryTime:2025-04-07 14:37:26 +0000 UTC Type:0 Mac:52:54:00:6e:6a:43 Iaid: IPaddr:192.168.50.135 Prefix:24 Hostname:pause-111763 Clientid:01:52:54:00:6e:6a:43}
	I0407 13:38:13.972272 1215662 main.go:141] libmachine: (pause-111763) DBG | domain pause-111763 has defined IP address 192.168.50.135 and MAC address 52:54:00:6e:6a:43 in network mk-pause-111763
	I0407 13:38:13.972509 1215662 main.go:141] libmachine: (pause-111763) Calling .GetSSHPort
	I0407 13:38:13.972789 1215662 main.go:141] libmachine: (pause-111763) Calling .GetSSHKeyPath
	I0407 13:38:13.973007 1215662 main.go:141] libmachine: (pause-111763) Calling .GetSSHKeyPath
	I0407 13:38:13.973226 1215662 main.go:141] libmachine: (pause-111763) Calling .GetSSHUsername
	I0407 13:38:13.973414 1215662 main.go:141] libmachine: Using SSH client type: native
	I0407 13:38:13.973740 1215662 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x836360] 0x839060 <nil>  [] 0s} 192.168.50.135 22 <nil> <nil>}
	I0407 13:38:13.973763 1215662 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0407 13:38:15.878556 1214596 pod_ready.go:103] pod "metrics-server-f79f97bbb-nwxq2" in "kube-system" namespace has status "Ready":"False"
	I0407 13:38:18.380673 1214596 pod_ready.go:103] pod "metrics-server-f79f97bbb-nwxq2" in "kube-system" namespace has status "Ready":"False"
	I0407 13:38:17.053095 1214786 pod_ready.go:103] pod "metrics-server-f79f97bbb-vntf4" in "kube-system" namespace has status "Ready":"False"
	I0407 13:38:19.552489 1214786 pod_ready.go:103] pod "metrics-server-f79f97bbb-vntf4" in "kube-system" namespace has status "Ready":"False"
	I0407 13:38:19.577044 1215662 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0407 13:38:19.577077 1215662 machine.go:96] duration metric: took 6.709784998s to provisionDockerMachine
	I0407 13:38:19.577090 1215662 start.go:293] postStartSetup for "pause-111763" (driver="kvm2")
	I0407 13:38:19.577107 1215662 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0407 13:38:19.577130 1215662 main.go:141] libmachine: (pause-111763) Calling .DriverName
	I0407 13:38:19.577633 1215662 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0407 13:38:19.577670 1215662 main.go:141] libmachine: (pause-111763) Calling .GetSSHHostname
	I0407 13:38:19.581841 1215662 main.go:141] libmachine: (pause-111763) DBG | domain pause-111763 has defined MAC address 52:54:00:6e:6a:43 in network mk-pause-111763
	I0407 13:38:19.582319 1215662 main.go:141] libmachine: (pause-111763) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:6e:6a:43", ip: ""} in network mk-pause-111763: {Iface:virbr2 ExpiryTime:2025-04-07 14:37:26 +0000 UTC Type:0 Mac:52:54:00:6e:6a:43 Iaid: IPaddr:192.168.50.135 Prefix:24 Hostname:pause-111763 Clientid:01:52:54:00:6e:6a:43}
	I0407 13:38:19.582356 1215662 main.go:141] libmachine: (pause-111763) DBG | domain pause-111763 has defined IP address 192.168.50.135 and MAC address 52:54:00:6e:6a:43 in network mk-pause-111763
	I0407 13:38:19.582593 1215662 main.go:141] libmachine: (pause-111763) Calling .GetSSHPort
	I0407 13:38:19.582936 1215662 main.go:141] libmachine: (pause-111763) Calling .GetSSHKeyPath
	I0407 13:38:19.583194 1215662 main.go:141] libmachine: (pause-111763) Calling .GetSSHUsername
	I0407 13:38:19.583393 1215662 sshutil.go:53] new ssh client: &{IP:192.168.50.135 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20602-1162386/.minikube/machines/pause-111763/id_rsa Username:docker}
	I0407 13:38:19.664859 1215662 ssh_runner.go:195] Run: cat /etc/os-release
	I0407 13:38:19.669619 1215662 info.go:137] Remote host: Buildroot 2023.02.9
	I0407 13:38:19.669656 1215662 filesync.go:126] Scanning /home/jenkins/minikube-integration/20602-1162386/.minikube/addons for local assets ...
	I0407 13:38:19.669758 1215662 filesync.go:126] Scanning /home/jenkins/minikube-integration/20602-1162386/.minikube/files for local assets ...
	I0407 13:38:19.669859 1215662 filesync.go:149] local asset: /home/jenkins/minikube-integration/20602-1162386/.minikube/files/etc/ssl/certs/11697162.pem -> 11697162.pem in /etc/ssl/certs
	I0407 13:38:19.669998 1215662 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0407 13:38:19.680625 1215662 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20602-1162386/.minikube/files/etc/ssl/certs/11697162.pem --> /etc/ssl/certs/11697162.pem (1708 bytes)
	I0407 13:38:19.709269 1215662 start.go:296] duration metric: took 132.160546ms for postStartSetup
	I0407 13:38:19.709314 1215662 fix.go:56] duration metric: took 6.867940004s for fixHost
	I0407 13:38:19.709343 1215662 main.go:141] libmachine: (pause-111763) Calling .GetSSHHostname
	I0407 13:38:19.713032 1215662 main.go:141] libmachine: (pause-111763) DBG | domain pause-111763 has defined MAC address 52:54:00:6e:6a:43 in network mk-pause-111763
	I0407 13:38:19.713533 1215662 main.go:141] libmachine: (pause-111763) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:6e:6a:43", ip: ""} in network mk-pause-111763: {Iface:virbr2 ExpiryTime:2025-04-07 14:37:26 +0000 UTC Type:0 Mac:52:54:00:6e:6a:43 Iaid: IPaddr:192.168.50.135 Prefix:24 Hostname:pause-111763 Clientid:01:52:54:00:6e:6a:43}
	I0407 13:38:19.713569 1215662 main.go:141] libmachine: (pause-111763) DBG | domain pause-111763 has defined IP address 192.168.50.135 and MAC address 52:54:00:6e:6a:43 in network mk-pause-111763
	I0407 13:38:19.713767 1215662 main.go:141] libmachine: (pause-111763) Calling .GetSSHPort
	I0407 13:38:19.714053 1215662 main.go:141] libmachine: (pause-111763) Calling .GetSSHKeyPath
	I0407 13:38:19.714338 1215662 main.go:141] libmachine: (pause-111763) Calling .GetSSHKeyPath
	I0407 13:38:19.714589 1215662 main.go:141] libmachine: (pause-111763) Calling .GetSSHUsername
	I0407 13:38:19.714832 1215662 main.go:141] libmachine: Using SSH client type: native
	I0407 13:38:19.715051 1215662 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x836360] 0x839060 <nil>  [] 0s} 192.168.50.135 22 <nil> <nil>}
	I0407 13:38:19.715064 1215662 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0407 13:38:19.823753 1215662 main.go:141] libmachine: SSH cmd err, output: <nil>: 1744033099.814927524
	
	I0407 13:38:19.823786 1215662 fix.go:216] guest clock: 1744033099.814927524
	I0407 13:38:19.823798 1215662 fix.go:229] Guest: 2025-04-07 13:38:19.814927524 +0000 UTC Remote: 2025-04-07 13:38:19.709319613 +0000 UTC m=+7.045075644 (delta=105.607911ms)
	I0407 13:38:19.823828 1215662 fix.go:200] guest clock delta is within tolerance: 105.607911ms
	I0407 13:38:19.823835 1215662 start.go:83] releasing machines lock for "pause-111763", held for 6.982480025s
	I0407 13:38:19.823860 1215662 main.go:141] libmachine: (pause-111763) Calling .DriverName
	I0407 13:38:19.824214 1215662 main.go:141] libmachine: (pause-111763) Calling .GetIP
	I0407 13:38:19.828076 1215662 main.go:141] libmachine: (pause-111763) DBG | domain pause-111763 has defined MAC address 52:54:00:6e:6a:43 in network mk-pause-111763
	I0407 13:38:19.828644 1215662 main.go:141] libmachine: (pause-111763) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:6e:6a:43", ip: ""} in network mk-pause-111763: {Iface:virbr2 ExpiryTime:2025-04-07 14:37:26 +0000 UTC Type:0 Mac:52:54:00:6e:6a:43 Iaid: IPaddr:192.168.50.135 Prefix:24 Hostname:pause-111763 Clientid:01:52:54:00:6e:6a:43}
	I0407 13:38:19.828700 1215662 main.go:141] libmachine: (pause-111763) DBG | domain pause-111763 has defined IP address 192.168.50.135 and MAC address 52:54:00:6e:6a:43 in network mk-pause-111763
	I0407 13:38:19.829126 1215662 main.go:141] libmachine: (pause-111763) Calling .DriverName
	I0407 13:38:19.829968 1215662 main.go:141] libmachine: (pause-111763) Calling .DriverName
	I0407 13:38:19.830223 1215662 main.go:141] libmachine: (pause-111763) Calling .DriverName
	I0407 13:38:19.830339 1215662 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0407 13:38:19.830401 1215662 main.go:141] libmachine: (pause-111763) Calling .GetSSHHostname
	I0407 13:38:19.830533 1215662 ssh_runner.go:195] Run: cat /version.json
	I0407 13:38:19.830557 1215662 main.go:141] libmachine: (pause-111763) Calling .GetSSHHostname
	I0407 13:38:19.834202 1215662 main.go:141] libmachine: (pause-111763) DBG | domain pause-111763 has defined MAC address 52:54:00:6e:6a:43 in network mk-pause-111763
	I0407 13:38:19.834241 1215662 main.go:141] libmachine: (pause-111763) DBG | domain pause-111763 has defined MAC address 52:54:00:6e:6a:43 in network mk-pause-111763
	I0407 13:38:19.834706 1215662 main.go:141] libmachine: (pause-111763) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:6e:6a:43", ip: ""} in network mk-pause-111763: {Iface:virbr2 ExpiryTime:2025-04-07 14:37:26 +0000 UTC Type:0 Mac:52:54:00:6e:6a:43 Iaid: IPaddr:192.168.50.135 Prefix:24 Hostname:pause-111763 Clientid:01:52:54:00:6e:6a:43}
	I0407 13:38:19.834742 1215662 main.go:141] libmachine: (pause-111763) DBG | domain pause-111763 has defined IP address 192.168.50.135 and MAC address 52:54:00:6e:6a:43 in network mk-pause-111763
	I0407 13:38:19.834769 1215662 main.go:141] libmachine: (pause-111763) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:6e:6a:43", ip: ""} in network mk-pause-111763: {Iface:virbr2 ExpiryTime:2025-04-07 14:37:26 +0000 UTC Type:0 Mac:52:54:00:6e:6a:43 Iaid: IPaddr:192.168.50.135 Prefix:24 Hostname:pause-111763 Clientid:01:52:54:00:6e:6a:43}
	I0407 13:38:19.834784 1215662 main.go:141] libmachine: (pause-111763) DBG | domain pause-111763 has defined IP address 192.168.50.135 and MAC address 52:54:00:6e:6a:43 in network mk-pause-111763
	I0407 13:38:19.835040 1215662 main.go:141] libmachine: (pause-111763) Calling .GetSSHPort
	I0407 13:38:19.835165 1215662 main.go:141] libmachine: (pause-111763) Calling .GetSSHPort
	I0407 13:38:19.835305 1215662 main.go:141] libmachine: (pause-111763) Calling .GetSSHKeyPath
	I0407 13:38:19.835398 1215662 main.go:141] libmachine: (pause-111763) Calling .GetSSHKeyPath
	I0407 13:38:19.835484 1215662 main.go:141] libmachine: (pause-111763) Calling .GetSSHUsername
	I0407 13:38:19.835547 1215662 main.go:141] libmachine: (pause-111763) Calling .GetSSHUsername
	I0407 13:38:19.835630 1215662 sshutil.go:53] new ssh client: &{IP:192.168.50.135 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20602-1162386/.minikube/machines/pause-111763/id_rsa Username:docker}
	I0407 13:38:19.836008 1215662 sshutil.go:53] new ssh client: &{IP:192.168.50.135 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20602-1162386/.minikube/machines/pause-111763/id_rsa Username:docker}
	I0407 13:38:19.940480 1215662 ssh_runner.go:195] Run: systemctl --version
	I0407 13:38:19.947496 1215662 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0407 13:38:20.107630 1215662 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0407 13:38:20.115827 1215662 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0407 13:38:20.115931 1215662 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0407 13:38:20.127442 1215662 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I0407 13:38:20.127504 1215662 start.go:495] detecting cgroup driver to use...
	I0407 13:38:20.127588 1215662 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0407 13:38:20.150765 1215662 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0407 13:38:20.168683 1215662 docker.go:217] disabling cri-docker service (if available) ...
	I0407 13:38:20.168784 1215662 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0407 13:38:20.186228 1215662 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0407 13:38:20.205015 1215662 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0407 13:38:20.365027 1215662 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0407 13:38:20.553245 1215662 docker.go:233] disabling docker service ...
	I0407 13:38:20.553328 1215662 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0407 13:38:20.576783 1215662 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0407 13:38:20.594397 1215662 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0407 13:38:20.740611 1215662 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0407 13:38:20.877899 1215662 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0407 13:38:20.894874 1215662 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0407 13:38:20.916346 1215662 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10" pause image...
	I0407 13:38:20.916424 1215662 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0407 13:38:20.931160 1215662 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0407 13:38:20.931241 1215662 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0407 13:38:20.944392 1215662 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0407 13:38:20.958586 1215662 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0407 13:38:20.971927 1215662 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0407 13:38:20.983810 1215662 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0407 13:38:20.997486 1215662 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I0407 13:38:21.011799 1215662 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I0407 13:38:21.025083 1215662 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0407 13:38:21.037026 1215662 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0407 13:38:21.049000 1215662 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0407 13:38:21.196504 1215662 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0407 13:38:21.458511 1215662 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I0407 13:38:21.458603 1215662 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0407 13:38:21.463966 1215662 start.go:563] Will wait 60s for crictl version
	I0407 13:38:21.464048 1215662 ssh_runner.go:195] Run: which crictl
	I0407 13:38:21.468284 1215662 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0407 13:38:21.509041 1215662 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0407 13:38:21.509181 1215662 ssh_runner.go:195] Run: crio --version
	I0407 13:38:21.544229 1215662 ssh_runner.go:195] Run: crio --version
	I0407 13:38:21.580336 1215662 out.go:177] * Preparing Kubernetes v1.32.2 on CRI-O 1.29.1 ...
	I0407 13:38:20.022948 1213906 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0407 13:38:20.037136 1213906 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0407 13:38:20.037218 1213906 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0407 13:38:20.076138 1213906 cri.go:89] found id: ""
	I0407 13:38:20.076168 1213906 logs.go:282] 0 containers: []
	W0407 13:38:20.076177 1213906 logs.go:284] No container was found matching "kube-apiserver"
	I0407 13:38:20.076183 1213906 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0407 13:38:20.076254 1213906 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0407 13:38:20.116308 1213906 cri.go:89] found id: ""
	I0407 13:38:20.116347 1213906 logs.go:282] 0 containers: []
	W0407 13:38:20.116357 1213906 logs.go:284] No container was found matching "etcd"
	I0407 13:38:20.116366 1213906 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0407 13:38:20.116425 1213906 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0407 13:38:20.154226 1213906 cri.go:89] found id: ""
	I0407 13:38:20.154261 1213906 logs.go:282] 0 containers: []
	W0407 13:38:20.154286 1213906 logs.go:284] No container was found matching "coredns"
	I0407 13:38:20.154293 1213906 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0407 13:38:20.154358 1213906 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0407 13:38:20.193534 1213906 cri.go:89] found id: ""
	I0407 13:38:20.193570 1213906 logs.go:282] 0 containers: []
	W0407 13:38:20.193581 1213906 logs.go:284] No container was found matching "kube-scheduler"
	I0407 13:38:20.193590 1213906 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0407 13:38:20.193658 1213906 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0407 13:38:20.233242 1213906 cri.go:89] found id: ""
	I0407 13:38:20.233280 1213906 logs.go:282] 0 containers: []
	W0407 13:38:20.233292 1213906 logs.go:284] No container was found matching "kube-proxy"
	I0407 13:38:20.233300 1213906 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0407 13:38:20.233379 1213906 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0407 13:38:20.273298 1213906 cri.go:89] found id: ""
	I0407 13:38:20.273340 1213906 logs.go:282] 0 containers: []
	W0407 13:38:20.273354 1213906 logs.go:284] No container was found matching "kube-controller-manager"
	I0407 13:38:20.273364 1213906 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0407 13:38:20.273483 1213906 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0407 13:38:20.317495 1213906 cri.go:89] found id: ""
	I0407 13:38:20.317538 1213906 logs.go:282] 0 containers: []
	W0407 13:38:20.317548 1213906 logs.go:284] No container was found matching "kindnet"
	I0407 13:38:20.317554 1213906 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0407 13:38:20.317611 1213906 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0407 13:38:20.356020 1213906 cri.go:89] found id: ""
	I0407 13:38:20.356054 1213906 logs.go:282] 0 containers: []
	W0407 13:38:20.356063 1213906 logs.go:284] No container was found matching "kubernetes-dashboard"
	I0407 13:38:20.356074 1213906 logs.go:123] Gathering logs for kubelet ...
	I0407 13:38:20.356087 1213906 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0407 13:38:20.424550 1213906 logs.go:123] Gathering logs for dmesg ...
	I0407 13:38:20.424618 1213906 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0407 13:38:20.444415 1213906 logs.go:123] Gathering logs for describe nodes ...
	I0407 13:38:20.444454 1213906 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0407 13:38:20.533211 1213906 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0407 13:38:20.533242 1213906 logs.go:123] Gathering logs for CRI-O ...
	I0407 13:38:20.533274 1213906 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0407 13:38:20.635661 1213906 logs.go:123] Gathering logs for container status ...
	I0407 13:38:20.635729 1213906 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0407 13:38:21.581865 1215662 main.go:141] libmachine: (pause-111763) Calling .GetIP
	I0407 13:38:21.585667 1215662 main.go:141] libmachine: (pause-111763) DBG | domain pause-111763 has defined MAC address 52:54:00:6e:6a:43 in network mk-pause-111763
	I0407 13:38:21.586228 1215662 main.go:141] libmachine: (pause-111763) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:6e:6a:43", ip: ""} in network mk-pause-111763: {Iface:virbr2 ExpiryTime:2025-04-07 14:37:26 +0000 UTC Type:0 Mac:52:54:00:6e:6a:43 Iaid: IPaddr:192.168.50.135 Prefix:24 Hostname:pause-111763 Clientid:01:52:54:00:6e:6a:43}
	I0407 13:38:21.586263 1215662 main.go:141] libmachine: (pause-111763) DBG | domain pause-111763 has defined IP address 192.168.50.135 and MAC address 52:54:00:6e:6a:43 in network mk-pause-111763
	I0407 13:38:21.586557 1215662 ssh_runner.go:195] Run: grep 192.168.50.1	host.minikube.internal$ /etc/hosts
	I0407 13:38:21.592056 1215662 kubeadm.go:883] updating cluster {Name:pause-111763 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.35.0-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.46-1743675393-20591@sha256:8de8167e280414c9d16e4c3da59bc85bc7c9cc24228af995863fc7bcabfcf727 Memory:2048 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.32.2 ClusterName:pause-111763 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.50.135 Port:8443 KubernetesVersion:v1.32.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0407 13:38:21.592237 1215662 preload.go:131] Checking if preload exists for k8s version v1.32.2 and runtime crio
	I0407 13:38:21.592290 1215662 ssh_runner.go:195] Run: sudo crictl images --output json
	I0407 13:38:21.643329 1215662 crio.go:514] all images are preloaded for cri-o runtime.
	I0407 13:38:21.643354 1215662 crio.go:433] Images already preloaded, skipping extraction
	I0407 13:38:21.643412 1215662 ssh_runner.go:195] Run: sudo crictl images --output json
	I0407 13:38:21.681481 1215662 crio.go:514] all images are preloaded for cri-o runtime.
	I0407 13:38:21.681519 1215662 cache_images.go:84] Images are preloaded, skipping loading
	I0407 13:38:21.681531 1215662 kubeadm.go:934] updating node { 192.168.50.135 8443 v1.32.2 crio true true} ...
	I0407 13:38:21.681693 1215662 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.32.2/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=pause-111763 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.50.135
	
	[Install]
	 config:
	{KubernetesVersion:v1.32.2 ClusterName:pause-111763 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0407 13:38:21.681812 1215662 ssh_runner.go:195] Run: crio config
	I0407 13:38:21.769215 1215662 cni.go:84] Creating CNI manager for ""
	I0407 13:38:21.769286 1215662 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0407 13:38:21.769304 1215662 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0407 13:38:21.769335 1215662 kubeadm.go:189] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.50.135 APIServerPort:8443 KubernetesVersion:v1.32.2 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:pause-111763 NodeName:pause-111763 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.50.135"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.50.135 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0407 13:38:21.769515 1215662 kubeadm.go:195] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.50.135
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "pause-111763"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.50.135"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.50.135"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      - name: "proxy-refresh-interval"
	        value: "70000"
	kubernetesVersion: v1.32.2
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0407 13:38:21.769674 1215662 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.32.2
	I0407 13:38:21.801098 1215662 binaries.go:44] Found k8s binaries, skipping transfer
	I0407 13:38:21.801203 1215662 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0407 13:38:21.816962 1215662 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (312 bytes)
	I0407 13:38:21.837283 1215662 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0407 13:38:21.944712 1215662 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2292 bytes)
	I0407 13:38:22.025517 1215662 ssh_runner.go:195] Run: grep 192.168.50.135	control-plane.minikube.internal$ /etc/hosts
	I0407 13:38:22.032832 1215662 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0407 13:38:22.344255 1215662 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0407 13:38:22.422193 1215662 certs.go:68] Setting up /home/jenkins/minikube-integration/20602-1162386/.minikube/profiles/pause-111763 for IP: 192.168.50.135
	I0407 13:38:22.422223 1215662 certs.go:194] generating shared ca certs ...
	I0407 13:38:22.422258 1215662 certs.go:226] acquiring lock for ca certs: {Name:mk8e89191fca7f2111bdd08c345368f593b0d5d0 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0407 13:38:22.422458 1215662 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/20602-1162386/.minikube/ca.key
	I0407 13:38:22.422532 1215662 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/20602-1162386/.minikube/proxy-client-ca.key
	I0407 13:38:22.422577 1215662 certs.go:256] generating profile certs ...
	I0407 13:38:22.422706 1215662 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/20602-1162386/.minikube/profiles/pause-111763/client.key
	I0407 13:38:22.422794 1215662 certs.go:359] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/20602-1162386/.minikube/profiles/pause-111763/apiserver.key.12705a14
	I0407 13:38:22.422855 1215662 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/20602-1162386/.minikube/profiles/pause-111763/proxy-client.key
	I0407 13:38:22.423026 1215662 certs.go:484] found cert: /home/jenkins/minikube-integration/20602-1162386/.minikube/certs/1169716.pem (1338 bytes)
	W0407 13:38:22.423071 1215662 certs.go:480] ignoring /home/jenkins/minikube-integration/20602-1162386/.minikube/certs/1169716_empty.pem, impossibly tiny 0 bytes
	I0407 13:38:22.423081 1215662 certs.go:484] found cert: /home/jenkins/minikube-integration/20602-1162386/.minikube/certs/ca-key.pem (1675 bytes)
	I0407 13:38:22.423120 1215662 certs.go:484] found cert: /home/jenkins/minikube-integration/20602-1162386/.minikube/certs/ca.pem (1078 bytes)
	I0407 13:38:22.423151 1215662 certs.go:484] found cert: /home/jenkins/minikube-integration/20602-1162386/.minikube/certs/cert.pem (1123 bytes)
	I0407 13:38:22.423181 1215662 certs.go:484] found cert: /home/jenkins/minikube-integration/20602-1162386/.minikube/certs/key.pem (1675 bytes)
	I0407 13:38:22.423273 1215662 certs.go:484] found cert: /home/jenkins/minikube-integration/20602-1162386/.minikube/files/etc/ssl/certs/11697162.pem (1708 bytes)
	I0407 13:38:22.424104 1215662 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20602-1162386/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0407 13:38:22.548050 1215662 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20602-1162386/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0407 13:38:22.633918 1215662 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20602-1162386/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0407 13:38:20.878291 1214596 pod_ready.go:103] pod "metrics-server-f79f97bbb-nwxq2" in "kube-system" namespace has status "Ready":"False"
	I0407 13:38:23.380633 1214596 pod_ready.go:103] pod "metrics-server-f79f97bbb-nwxq2" in "kube-system" namespace has status "Ready":"False"
	I0407 13:38:21.552936 1214786 pod_ready.go:103] pod "metrics-server-f79f97bbb-vntf4" in "kube-system" namespace has status "Ready":"False"
	I0407 13:38:23.556036 1214786 pod_ready.go:103] pod "metrics-server-f79f97bbb-vntf4" in "kube-system" namespace has status "Ready":"False"
	I0407 13:38:23.179699 1213906 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0407 13:38:23.195603 1213906 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0407 13:38:23.195701 1213906 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0407 13:38:23.247984 1213906 cri.go:89] found id: ""
	I0407 13:38:23.248021 1213906 logs.go:282] 0 containers: []
	W0407 13:38:23.248030 1213906 logs.go:284] No container was found matching "kube-apiserver"
	I0407 13:38:23.248037 1213906 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0407 13:38:23.248113 1213906 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0407 13:38:23.297330 1213906 cri.go:89] found id: ""
	I0407 13:38:23.297367 1213906 logs.go:282] 0 containers: []
	W0407 13:38:23.297380 1213906 logs.go:284] No container was found matching "etcd"
	I0407 13:38:23.297389 1213906 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0407 13:38:23.297465 1213906 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0407 13:38:23.342695 1213906 cri.go:89] found id: ""
	I0407 13:38:23.342732 1213906 logs.go:282] 0 containers: []
	W0407 13:38:23.342745 1213906 logs.go:284] No container was found matching "coredns"
	I0407 13:38:23.342754 1213906 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0407 13:38:23.342854 1213906 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0407 13:38:23.390557 1213906 cri.go:89] found id: ""
	I0407 13:38:23.390597 1213906 logs.go:282] 0 containers: []
	W0407 13:38:23.390610 1213906 logs.go:284] No container was found matching "kube-scheduler"
	I0407 13:38:23.390618 1213906 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0407 13:38:23.390693 1213906 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0407 13:38:23.436306 1213906 cri.go:89] found id: ""
	I0407 13:38:23.436431 1213906 logs.go:282] 0 containers: []
	W0407 13:38:23.436454 1213906 logs.go:284] No container was found matching "kube-proxy"
	I0407 13:38:23.436465 1213906 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0407 13:38:23.436544 1213906 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0407 13:38:23.489592 1213906 cri.go:89] found id: ""
	I0407 13:38:23.489635 1213906 logs.go:282] 0 containers: []
	W0407 13:38:23.489647 1213906 logs.go:284] No container was found matching "kube-controller-manager"
	I0407 13:38:23.489656 1213906 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0407 13:38:23.489757 1213906 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0407 13:38:23.549612 1213906 cri.go:89] found id: ""
	I0407 13:38:23.549665 1213906 logs.go:282] 0 containers: []
	W0407 13:38:23.549679 1213906 logs.go:284] No container was found matching "kindnet"
	I0407 13:38:23.549688 1213906 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0407 13:38:23.549803 1213906 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0407 13:38:23.593793 1213906 cri.go:89] found id: ""
	I0407 13:38:23.593834 1213906 logs.go:282] 0 containers: []
	W0407 13:38:23.593846 1213906 logs.go:284] No container was found matching "kubernetes-dashboard"
	I0407 13:38:23.593861 1213906 logs.go:123] Gathering logs for dmesg ...
	I0407 13:38:23.593882 1213906 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0407 13:38:23.613155 1213906 logs.go:123] Gathering logs for describe nodes ...
	I0407 13:38:23.613214 1213906 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0407 13:38:23.692080 1213906 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0407 13:38:23.692115 1213906 logs.go:123] Gathering logs for CRI-O ...
	I0407 13:38:23.692134 1213906 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0407 13:38:23.792659 1213906 logs.go:123] Gathering logs for container status ...
	I0407 13:38:23.792710 1213906 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0407 13:38:23.867830 1213906 logs.go:123] Gathering logs for kubelet ...
	I0407 13:38:23.867872 1213906 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0407 13:38:26.435191 1213906 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0407 13:38:26.450136 1213906 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0407 13:38:26.450228 1213906 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0407 13:38:26.486457 1213906 cri.go:89] found id: ""
	I0407 13:38:26.486498 1213906 logs.go:282] 0 containers: []
	W0407 13:38:26.486510 1213906 logs.go:284] No container was found matching "kube-apiserver"
	I0407 13:38:26.486520 1213906 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0407 13:38:26.486605 1213906 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0407 13:38:26.523604 1213906 cri.go:89] found id: ""
	I0407 13:38:26.523642 1213906 logs.go:282] 0 containers: []
	W0407 13:38:26.523655 1213906 logs.go:284] No container was found matching "etcd"
	I0407 13:38:26.523663 1213906 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0407 13:38:26.523737 1213906 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0407 13:38:26.563215 1213906 cri.go:89] found id: ""
	I0407 13:38:26.563253 1213906 logs.go:282] 0 containers: []
	W0407 13:38:26.563276 1213906 logs.go:284] No container was found matching "coredns"
	I0407 13:38:26.563284 1213906 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0407 13:38:26.563353 1213906 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0407 13:38:26.597983 1213906 cri.go:89] found id: ""
	I0407 13:38:26.598018 1213906 logs.go:282] 0 containers: []
	W0407 13:38:26.598030 1213906 logs.go:284] No container was found matching "kube-scheduler"
	I0407 13:38:26.598038 1213906 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0407 13:38:26.598111 1213906 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0407 13:38:26.636270 1213906 cri.go:89] found id: ""
	I0407 13:38:26.636304 1213906 logs.go:282] 0 containers: []
	W0407 13:38:26.636313 1213906 logs.go:284] No container was found matching "kube-proxy"
	I0407 13:38:26.636323 1213906 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0407 13:38:26.636395 1213906 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0407 13:38:26.675668 1213906 cri.go:89] found id: ""
	I0407 13:38:26.675705 1213906 logs.go:282] 0 containers: []
	W0407 13:38:26.675717 1213906 logs.go:284] No container was found matching "kube-controller-manager"
	I0407 13:38:26.675731 1213906 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0407 13:38:26.675828 1213906 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0407 13:38:26.713079 1213906 cri.go:89] found id: ""
	I0407 13:38:26.713109 1213906 logs.go:282] 0 containers: []
	W0407 13:38:26.713119 1213906 logs.go:284] No container was found matching "kindnet"
	I0407 13:38:26.713126 1213906 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0407 13:38:26.713235 1213906 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0407 13:38:26.751390 1213906 cri.go:89] found id: ""
	I0407 13:38:26.751419 1213906 logs.go:282] 0 containers: []
	W0407 13:38:26.751434 1213906 logs.go:284] No container was found matching "kubernetes-dashboard"
	I0407 13:38:26.751445 1213906 logs.go:123] Gathering logs for container status ...
	I0407 13:38:26.751457 1213906 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0407 13:38:26.792848 1213906 logs.go:123] Gathering logs for kubelet ...
	I0407 13:38:26.792890 1213906 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0407 13:38:26.846159 1213906 logs.go:123] Gathering logs for dmesg ...
	I0407 13:38:26.846214 1213906 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0407 13:38:26.860024 1213906 logs.go:123] Gathering logs for describe nodes ...
	I0407 13:38:26.860061 1213906 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0407 13:38:26.935582 1213906 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0407 13:38:26.935610 1213906 logs.go:123] Gathering logs for CRI-O ...
	I0407 13:38:26.935624 1213906 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0407 13:38:22.746116 1215662 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20602-1162386/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0407 13:38:22.848444 1215662 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20602-1162386/.minikube/profiles/pause-111763/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1419 bytes)
	I0407 13:38:22.964506 1215662 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20602-1162386/.minikube/profiles/pause-111763/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0407 13:38:23.029616 1215662 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20602-1162386/.minikube/profiles/pause-111763/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0407 13:38:23.070863 1215662 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20602-1162386/.minikube/profiles/pause-111763/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0407 13:38:23.120929 1215662 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20602-1162386/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0407 13:38:23.166804 1215662 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20602-1162386/.minikube/certs/1169716.pem --> /usr/share/ca-certificates/1169716.pem (1338 bytes)
	I0407 13:38:23.217442 1215662 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20602-1162386/.minikube/files/etc/ssl/certs/11697162.pem --> /usr/share/ca-certificates/11697162.pem (1708 bytes)
	I0407 13:38:23.266209 1215662 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0407 13:38:23.303454 1215662 ssh_runner.go:195] Run: openssl version
	I0407 13:38:23.322699 1215662 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0407 13:38:23.348173 1215662 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0407 13:38:23.357410 1215662 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Apr  7 12:14 /usr/share/ca-certificates/minikubeCA.pem
	I0407 13:38:23.357495 1215662 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0407 13:38:23.378370 1215662 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0407 13:38:23.398674 1215662 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/1169716.pem && ln -fs /usr/share/ca-certificates/1169716.pem /etc/ssl/certs/1169716.pem"
	I0407 13:38:23.425964 1215662 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/1169716.pem
	I0407 13:38:23.439064 1215662 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Apr  7 12:22 /usr/share/ca-certificates/1169716.pem
	I0407 13:38:23.439154 1215662 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/1169716.pem
	I0407 13:38:23.470410 1215662 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/1169716.pem /etc/ssl/certs/51391683.0"
	I0407 13:38:23.496142 1215662 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/11697162.pem && ln -fs /usr/share/ca-certificates/11697162.pem /etc/ssl/certs/11697162.pem"
	I0407 13:38:23.520565 1215662 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/11697162.pem
	I0407 13:38:23.532477 1215662 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Apr  7 12:22 /usr/share/ca-certificates/11697162.pem
	I0407 13:38:23.532559 1215662 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/11697162.pem
	I0407 13:38:23.542502 1215662 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/11697162.pem /etc/ssl/certs/3ec20f2e.0"
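The hash-named links created above (b5213941.0, 51391683.0, 3ec20f2e.0) follow the standard OpenSSL CA-directory convention: "openssl x509 -hash" prints the certificate's subject hash, and the system trust store expects a symlink named <hash>.0 pointing at the PEM file. A minimal sketch of the same two steps for the minikube CA, with paths taken from the log above:

	HASH=$(openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem)
	sudo ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/${HASH}.0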
	I0407 13:38:23.567739 1215662 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0407 13:38:23.575186 1215662 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0407 13:38:23.589372 1215662 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0407 13:38:23.599661 1215662 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0407 13:38:23.609524 1215662 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0407 13:38:23.619994 1215662 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0407 13:38:23.632393 1215662 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
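The -checkend 86400 probes above ask OpenSSL whether each certificate remains valid for at least another 86400 seconds (24 hours); the command exits non-zero if the certificate would expire within that window. A stand-alone example of the same check, using the apiserver cert path that appears in this log:

	sudo openssl x509 -noout -in /var/lib/minikube/certs/apiserver.crt -checkend 86400 \
	  && echo "valid for at least 24h" || echo "expires within 24h"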
	I0407 13:38:23.639390 1215662 kubeadm.go:392] StartCluster: {Name:pause-111763 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.35.0-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.46-1743675393-20591@sha256:8de8167e280414c9d16e4c3da59bc85bc7c9cc24228af995863fc7bcabfcf727 Memory:2048 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.32.2 ClusterName:pause-111763 Namespa
ce:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.50.135 Port:8443 KubernetesVersion:v1.32.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-securit
y-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0407 13:38:23.639552 1215662 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0407 13:38:23.639620 1215662 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0407 13:38:23.726396 1215662 cri.go:89] found id: "5810b858b178854318d9f75335b6fbfe66d2877cd1c9a3b05feb45efc1f269bf"
	I0407 13:38:23.726432 1215662 cri.go:89] found id: "c07cdd2b6b317cb6a72baa68228fcebdf5c9ea28202d01df5b8f10d30cc41cc7"
	I0407 13:38:23.726438 1215662 cri.go:89] found id: "2fe876f74127fa8f6bde0d8368f91d5569eedbef6f1a21083f63b9b96193a6f5"
	I0407 13:38:23.726443 1215662 cri.go:89] found id: "7eed44592009903caf00bff1724f6970ed877b199d1d3d2f6be2073d585ba6bc"
	I0407 13:38:23.726447 1215662 cri.go:89] found id: "6f44aca5b3b4338727c1cd156ccabe024868350e7ba51a714a9fd79b952a60b4"
	I0407 13:38:23.726452 1215662 cri.go:89] found id: "bb00dc1832f25b934ca5d16864198cb029f701ab499845a4a2a2a29cd431b4e3"
	I0407 13:38:23.726456 1215662 cri.go:89] found id: "67fa1096fcd3ea1582575a7ad88facc5d40891b4ce4bdfc09ca8640a410969e3"
	I0407 13:38:23.726459 1215662 cri.go:89] found id: "e1252cc8e7eec024319de723d3f8a2557c52be660e46c095f9ea76af09c96331"
	I0407 13:38:23.726463 1215662 cri.go:89] found id: "180d98c8602766e97df035ab78eb3a4d3424f59553aee8a5d192706cd186af28"
	I0407 13:38:23.726471 1215662 cri.go:89] found id: "b9afe851415fcb11215dbfcb715d5b3820bac9d1a65806a07f17ff5785b040f1"
	I0407 13:38:23.726475 1215662 cri.go:89] found id: "b2a1ba832cc573354c0a61f3a3bc63b52e50468adf55fd01288a78d9e6b3f04c"
	I0407 13:38:23.726478 1215662 cri.go:89] found id: "3f15fd98836e3a799770112d2e9b005ea681584c46ad08db234624b9233805b7"
	I0407 13:38:23.726483 1215662 cri.go:89] found id: ""
	I0407 13:38:23.726547 1215662 ssh_runner.go:195] Run: sudo runc list -f json

                                                
                                                
-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p pause-111763 -n pause-111763
helpers_test.go:261: (dbg) Run:  kubectl --context pause-111763 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:285: <<< TestPause/serial/SecondStartNoReconfiguration FAILED: end of post-mortem logs <<<
helpers_test.go:286: ---------------------/post-mortem---------------------------------
--- FAIL: TestPause/serial/SecondStartNoReconfiguration (61.05s)

                                                
                                    
x
+
TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop (541.77s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop
start_stop_delete_test.go:272: (dbg) TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.211:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.211:8443: connect: connection refused
E0407 13:44:40.604288 1169716 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20602-1162386/.minikube/profiles/no-preload-028452/client.crt: no such file or directory" logger="UnhandledError"
E0407 13:44:40.610860 1169716 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20602-1162386/.minikube/profiles/no-preload-028452/client.crt: no such file or directory" logger="UnhandledError"
E0407 13:44:40.622359 1169716 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20602-1162386/.minikube/profiles/no-preload-028452/client.crt: no such file or directory" logger="UnhandledError"
E0407 13:44:40.643942 1169716 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20602-1162386/.minikube/profiles/no-preload-028452/client.crt: no such file or directory" logger="UnhandledError"
E0407 13:44:40.685506 1169716 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20602-1162386/.minikube/profiles/no-preload-028452/client.crt: no such file or directory" logger="UnhandledError"
E0407 13:44:40.767160 1169716 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20602-1162386/.minikube/profiles/no-preload-028452/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.211:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.211:8443: connect: connection refused
E0407 13:44:40.929072 1169716 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20602-1162386/.minikube/profiles/no-preload-028452/client.crt: no such file or directory" logger="UnhandledError"
E0407 13:44:41.251154 1169716 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20602-1162386/.minikube/profiles/no-preload-028452/client.crt: no such file or directory" logger="UnhandledError"
E0407 13:44:41.892505 1169716 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20602-1162386/.minikube/profiles/no-preload-028452/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.211:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.211:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.211:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.211:8443: connect: connection refused
E0407 13:44:43.174464 1169716 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20602-1162386/.minikube/profiles/no-preload-028452/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.211:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.211:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.211:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.211:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.211:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.211:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.211:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.211:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.211:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.211:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.211:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.211:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.211:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.211:8443: connect: connection refused
E0407 13:44:50.858545 1169716 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20602-1162386/.minikube/profiles/no-preload-028452/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.211:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.211:8443: connect: connection refused (this message repeated 11 times)
E0407 13:45:01.100810 1169716 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20602-1162386/.minikube/profiles/no-preload-028452/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.211:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.211:8443: connect: connection refused (this message repeated 7 times)
E0407 13:45:08.007997 1169716 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20602-1162386/.minikube/profiles/addons-660533/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.211:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.211:8443: connect: connection refused (this message repeated 13 times)
E0407 13:45:21.582884 1169716 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20602-1162386/.minikube/profiles/no-preload-028452/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.211:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.211:8443: connect: connection refused (this message repeated 41 times)
E0407 13:46:02.544458 1169716 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20602-1162386/.minikube/profiles/no-preload-028452/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.211:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.211:8443: connect: connection refused (this message repeated 7 times)
E0407 13:46:09.343351 1169716 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20602-1162386/.minikube/profiles/functional-728898/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.211:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.211:8443: connect: connection refused (this message repeated 56 times)
E0407 13:47:04.915750 1169716 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20602-1162386/.minikube/profiles/addons-660533/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.211:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.211:8443: connect: connection refused (this message repeated 19 times)
E0407 13:47:24.466768 1169716 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20602-1162386/.minikube/profiles/no-preload-028452/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.211:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.211:8443: connect: connection refused
I0407 13:48:27.935423 1169716 config.go:182] Loaded profile config "calico-056871": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.32.2
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.211:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.211:8443: connect: connection refused
E0407 13:49:40.604849 1169716 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20602-1162386/.minikube/profiles/no-preload-028452/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.211:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.211:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.211:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.211:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.211:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.211:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.211:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.211:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.211:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.211:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.211:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.211:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.211:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.211:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.211:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.211:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.211:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.211:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.211:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.211:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.211:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.211:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.211:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.211:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.211:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.211:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.211:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.211:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.211:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.211:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.211:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.211:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.211:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.211:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.211:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.211:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.211:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.211:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.211:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.211:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.211:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.211:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.211:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.211:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.211:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.211:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.211:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.211:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.211:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.211:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.211:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.211:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.211:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.211:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.211:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.211:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.211:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.211:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.211:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.211:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.211:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.211:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.211:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.211:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.211:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.211:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.211:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.211:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.211:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.211:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.211:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.211:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.211:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.211:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.211:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.211:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.211:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.211:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.211:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.211:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.211:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.211:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.211:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.211:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.211:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.211:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.211:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.211:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.211:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.211:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.211:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.211:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.211:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.211:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.211:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.211:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.211:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.211:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.211:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.211:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.211:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.211:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.211:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.211:8443: connect: connection refused
E0407 13:51:09.343640 1169716 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20602-1162386/.minikube/profiles/functional-728898/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.211:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.211:8443: connect: connection refused
[previous WARNING repeated 12 more times]
E0407 13:51:22.547619 1169716 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20602-1162386/.minikube/profiles/auto-056871/client.crt: no such file or directory" logger="UnhandledError"
E0407 13:51:22.554197 1169716 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20602-1162386/.minikube/profiles/auto-056871/client.crt: no such file or directory" logger="UnhandledError"
E0407 13:51:22.565851 1169716 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20602-1162386/.minikube/profiles/auto-056871/client.crt: no such file or directory" logger="UnhandledError"
E0407 13:51:22.587428 1169716 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20602-1162386/.minikube/profiles/auto-056871/client.crt: no such file or directory" logger="UnhandledError"
E0407 13:51:22.629034 1169716 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20602-1162386/.minikube/profiles/auto-056871/client.crt: no such file or directory" logger="UnhandledError"
E0407 13:51:22.710633 1169716 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20602-1162386/.minikube/profiles/auto-056871/client.crt: no such file or directory" logger="UnhandledError"
E0407 13:51:22.872066 1169716 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20602-1162386/.minikube/profiles/auto-056871/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.211:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.211:8443: connect: connection refused
E0407 13:51:23.194080 1169716 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20602-1162386/.minikube/profiles/auto-056871/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.211:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.211:8443: connect: connection refused
[previous WARNING repeated 4 more times]
I0407 13:51:28.134756 1169716 config.go:182] Loaded profile config "bridge-056871": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.32.2
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.211:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.211:8443: connect: connection refused
[previous WARNING repeated 3 more times]
E0407 13:51:32.800965 1169716 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20602-1162386/.minikube/profiles/auto-056871/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.211:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.211:8443: connect: connection refused
[previous WARNING repeated 10 more times]
E0407 13:51:43.042928 1169716 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20602-1162386/.minikube/profiles/auto-056871/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.211:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.211:8443: connect: connection refused
[previous WARNING repeated 19 more times]
E0407 13:52:03.525092 1169716 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20602-1162386/.minikube/profiles/auto-056871/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.211:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.211:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.211:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.211:8443: connect: connection refused
E0407 13:52:04.915746 1169716 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20602-1162386/.minikube/profiles/addons-660533/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.211:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.211:8443: connect: connection refused
[previous WARNING repeated 26 more times]
E0407 13:52:32.416035 1169716 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20602-1162386/.minikube/profiles/functional-728898/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.211:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.211:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.211:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.211:8443: connect: connection refused
start_stop_delete_test.go:272: ***** TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: pod "k8s-app=kubernetes-dashboard" failed to start within 9m0s: context deadline exceeded ****
start_stop_delete_test.go:272: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p old-k8s-version-435730 -n old-k8s-version-435730
start_stop_delete_test.go:272: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p old-k8s-version-435730 -n old-k8s-version-435730: exit status 2 (261.676051ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
start_stop_delete_test.go:272: status error: exit status 2 (may be ok)
start_stop_delete_test.go:272: "old-k8s-version-435730" apiserver is not running, skipping kubectl commands (state="Stopped")
start_stop_delete_test.go:273: failed waiting for 'addon dashboard' pod post-stop-start: k8s-app=kubernetes-dashboard within 9m0s: context deadline exceeded
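For reference, the request the helper kept retrying and the apiserver state it depends on can be checked by hand. The commands below are a hypothetical manual reproduction assembled from the request URL and profile name in the log above (assuming the kubeconfig context matches the profile name, as it does elsewhere in this report); they were not executed by the test run:

# Same pod list the helper polls (namespace and label selector taken from the URL above);
# with the apiserver stopped it fails with the same "connection refused".
kubectl --context old-k8s-version-435730 -n kubernetes-dashboard get pods -l k8s-app=kubernetes-dashboard

# Apiserver state for the profile, using the status invocation already shown in this log:
out/minikube-linux-amd64 status --format={{.APIServer}} -p old-k8s-version-435730 -n old-k8s-version-435730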
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-435730 -n old-k8s-version-435730
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-435730 -n old-k8s-version-435730: exit status 2 (265.65166ms)

                                                
                                                
-- stdout --
	Running

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 2 (may be ok)
helpers_test.go:244: <<< TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p old-k8s-version-435730 logs -n 25
helpers_test.go:252: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop logs: 
-- stdout --
	
	==> Audit <==
	|---------|------------------------------------------------------|---------------|---------|---------|---------------------|---------------------|
	| Command |                         Args                         |    Profile    |  User   | Version |     Start Time      |      End Time       |
	|---------|------------------------------------------------------|---------------|---------|---------|---------------------|---------------------|
	| ssh     | -p bridge-056871 sudo iptables                       | bridge-056871 | jenkins | v1.35.0 | 07 Apr 25 13:51 UTC | 07 Apr 25 13:51 UTC |
	|         | -t nat -L -n -v                                      |               |         |         |                     |                     |
	| ssh     | -p bridge-056871 sudo                                | bridge-056871 | jenkins | v1.35.0 | 07 Apr 25 13:51 UTC | 07 Apr 25 13:51 UTC |
	|         | systemctl status kubelet --all                       |               |         |         |                     |                     |
	|         | --full --no-pager                                    |               |         |         |                     |                     |
	| ssh     | -p bridge-056871 sudo                                | bridge-056871 | jenkins | v1.35.0 | 07 Apr 25 13:51 UTC | 07 Apr 25 13:51 UTC |
	|         | systemctl cat kubelet                                |               |         |         |                     |                     |
	|         | --no-pager                                           |               |         |         |                     |                     |
	| ssh     | -p bridge-056871 sudo                                | bridge-056871 | jenkins | v1.35.0 | 07 Apr 25 13:51 UTC | 07 Apr 25 13:51 UTC |
	|         | journalctl -xeu kubelet --all                        |               |         |         |                     |                     |
	|         | --full --no-pager                                    |               |         |         |                     |                     |
	| ssh     | -p bridge-056871 sudo cat                            | bridge-056871 | jenkins | v1.35.0 | 07 Apr 25 13:51 UTC | 07 Apr 25 13:51 UTC |
	|         | /etc/kubernetes/kubelet.conf                         |               |         |         |                     |                     |
	| ssh     | -p bridge-056871 sudo cat                            | bridge-056871 | jenkins | v1.35.0 | 07 Apr 25 13:51 UTC | 07 Apr 25 13:51 UTC |
	|         | /var/lib/kubelet/config.yaml                         |               |         |         |                     |                     |
	| ssh     | -p bridge-056871 sudo                                | bridge-056871 | jenkins | v1.35.0 | 07 Apr 25 13:51 UTC |                     |
	|         | systemctl status docker --all                        |               |         |         |                     |                     |
	|         | --full --no-pager                                    |               |         |         |                     |                     |
	| ssh     | -p bridge-056871 sudo                                | bridge-056871 | jenkins | v1.35.0 | 07 Apr 25 13:51 UTC | 07 Apr 25 13:51 UTC |
	|         | systemctl cat docker                                 |               |         |         |                     |                     |
	|         | --no-pager                                           |               |         |         |                     |                     |
	| ssh     | -p bridge-056871 sudo cat                            | bridge-056871 | jenkins | v1.35.0 | 07 Apr 25 13:51 UTC | 07 Apr 25 13:51 UTC |
	|         | /etc/docker/daemon.json                              |               |         |         |                     |                     |
	| ssh     | -p bridge-056871 sudo docker                         | bridge-056871 | jenkins | v1.35.0 | 07 Apr 25 13:51 UTC |                     |
	|         | system info                                          |               |         |         |                     |                     |
	| ssh     | -p bridge-056871 sudo                                | bridge-056871 | jenkins | v1.35.0 | 07 Apr 25 13:51 UTC |                     |
	|         | systemctl status cri-docker                          |               |         |         |                     |                     |
	|         | --all --full --no-pager                              |               |         |         |                     |                     |
	| ssh     | -p bridge-056871 sudo                                | bridge-056871 | jenkins | v1.35.0 | 07 Apr 25 13:51 UTC | 07 Apr 25 13:51 UTC |
	|         | systemctl cat cri-docker                             |               |         |         |                     |                     |
	|         | --no-pager                                           |               |         |         |                     |                     |
	| ssh     | -p bridge-056871 sudo cat                            | bridge-056871 | jenkins | v1.35.0 | 07 Apr 25 13:51 UTC |                     |
	|         | /etc/systemd/system/cri-docker.service.d/10-cni.conf |               |         |         |                     |                     |
	| ssh     | -p bridge-056871 sudo cat                            | bridge-056871 | jenkins | v1.35.0 | 07 Apr 25 13:51 UTC | 07 Apr 25 13:51 UTC |
	|         | /usr/lib/systemd/system/cri-docker.service           |               |         |         |                     |                     |
	| ssh     | -p bridge-056871 sudo                                | bridge-056871 | jenkins | v1.35.0 | 07 Apr 25 13:51 UTC | 07 Apr 25 13:51 UTC |
	|         | cri-dockerd --version                                |               |         |         |                     |                     |
	| ssh     | -p bridge-056871 sudo                                | bridge-056871 | jenkins | v1.35.0 | 07 Apr 25 13:51 UTC |                     |
	|         | systemctl status containerd                          |               |         |         |                     |                     |
	|         | --all --full --no-pager                              |               |         |         |                     |                     |
	| ssh     | -p bridge-056871 sudo                                | bridge-056871 | jenkins | v1.35.0 | 07 Apr 25 13:51 UTC | 07 Apr 25 13:51 UTC |
	|         | systemctl cat containerd                             |               |         |         |                     |                     |
	|         | --no-pager                                           |               |         |         |                     |                     |
	| ssh     | -p bridge-056871 sudo cat                            | bridge-056871 | jenkins | v1.35.0 | 07 Apr 25 13:51 UTC | 07 Apr 25 13:51 UTC |
	|         | /lib/systemd/system/containerd.service               |               |         |         |                     |                     |
	| ssh     | -p bridge-056871 sudo cat                            | bridge-056871 | jenkins | v1.35.0 | 07 Apr 25 13:51 UTC | 07 Apr 25 13:51 UTC |
	|         | /etc/containerd/config.toml                          |               |         |         |                     |                     |
	| ssh     | -p bridge-056871 sudo                                | bridge-056871 | jenkins | v1.35.0 | 07 Apr 25 13:51 UTC | 07 Apr 25 13:51 UTC |
	|         | containerd config dump                               |               |         |         |                     |                     |
	| ssh     | -p bridge-056871 sudo                                | bridge-056871 | jenkins | v1.35.0 | 07 Apr 25 13:51 UTC | 07 Apr 25 13:51 UTC |
	|         | systemctl status crio --all                          |               |         |         |                     |                     |
	|         | --full --no-pager                                    |               |         |         |                     |                     |
	| ssh     | -p bridge-056871 sudo                                | bridge-056871 | jenkins | v1.35.0 | 07 Apr 25 13:51 UTC | 07 Apr 25 13:51 UTC |
	|         | systemctl cat crio --no-pager                        |               |         |         |                     |                     |
	| ssh     | -p bridge-056871 sudo find                           | bridge-056871 | jenkins | v1.35.0 | 07 Apr 25 13:51 UTC | 07 Apr 25 13:51 UTC |
	|         | /etc/crio -type f -exec sh -c                        |               |         |         |                     |                     |
	|         | 'echo {}; cat {}' \;                                 |               |         |         |                     |                     |
	| ssh     | -p bridge-056871 sudo crio                           | bridge-056871 | jenkins | v1.35.0 | 07 Apr 25 13:51 UTC | 07 Apr 25 13:51 UTC |
	|         | config                                               |               |         |         |                     |                     |
	| delete  | -p bridge-056871                                     | bridge-056871 | jenkins | v1.35.0 | 07 Apr 25 13:51 UTC | 07 Apr 25 13:51 UTC |
	|---------|------------------------------------------------------|---------------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2025/04/07 13:50:24
	Running on machine: ubuntu-20-agent-2
	Binary: Built with gc go1.24.0 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0407 13:50:24.272012 1230577 out.go:345] Setting OutFile to fd 1 ...
	I0407 13:50:24.272287 1230577 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0407 13:50:24.272296 1230577 out.go:358] Setting ErrFile to fd 2...
	I0407 13:50:24.272301 1230577 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0407 13:50:24.272500 1230577 root.go:338] Updating PATH: /home/jenkins/minikube-integration/20602-1162386/.minikube/bin
	I0407 13:50:24.273223 1230577 out.go:352] Setting JSON to false
	I0407 13:50:24.274746 1230577 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-2","uptime":19968,"bootTime":1744013856,"procs":298,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1078-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0407 13:50:24.274894 1230577 start.go:139] virtualization: kvm guest
	I0407 13:50:24.277205 1230577 out.go:177] * [bridge-056871] minikube v1.35.0 on Ubuntu 20.04 (kvm/amd64)
	I0407 13:50:24.278700 1230577 notify.go:220] Checking for updates...
	I0407 13:50:24.278732 1230577 out.go:177]   - MINIKUBE_LOCATION=20602
	I0407 13:50:24.280213 1230577 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0407 13:50:24.281730 1230577 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/20602-1162386/kubeconfig
	I0407 13:50:24.283144 1230577 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/20602-1162386/.minikube
	I0407 13:50:24.284452 1230577 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0407 13:50:24.286197 1230577 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0407 13:50:24.288646 1230577 config.go:182] Loaded profile config "default-k8s-diff-port-405061": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.32.2
	I0407 13:50:24.288784 1230577 config.go:182] Loaded profile config "flannel-056871": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.32.2
	I0407 13:50:24.288884 1230577 config.go:182] Loaded profile config "old-k8s-version-435730": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.20.0
	I0407 13:50:24.289053 1230577 driver.go:394] Setting default libvirt URI to qemu:///system
	I0407 13:50:24.333039 1230577 out.go:177] * Using the kvm2 driver based on user configuration
	I0407 13:50:24.334437 1230577 start.go:297] selected driver: kvm2
	I0407 13:50:24.334487 1230577 start.go:901] validating driver "kvm2" against <nil>
	I0407 13:50:24.334505 1230577 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0407 13:50:24.336072 1230577 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0407 13:50:24.336312 1230577 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/20602-1162386/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0407 13:50:24.356560 1230577 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.35.0
	I0407 13:50:24.356627 1230577 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0407 13:50:24.356862 1230577 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0407 13:50:24.356904 1230577 cni.go:84] Creating CNI manager for "bridge"
	I0407 13:50:24.356910 1230577 start_flags.go:319] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I0407 13:50:24.356958 1230577 start.go:340] cluster config:
	{Name:bridge-056871 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.46-1743675393-20591@sha256:8de8167e280414c9d16e4c3da59bc85bc7c9cc24228af995863fc7bcabfcf727 Memory:3072 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.32.2 ClusterName:bridge-056871 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:bridge} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.32.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0407 13:50:24.357078 1230577 iso.go:125] acquiring lock: {Name:mk51e1827709f7a3810dbd898083f8185ece65eb Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0407 13:50:24.359372 1230577 out.go:177] * Starting "bridge-056871" primary control-plane node in "bridge-056871" cluster
	I0407 13:50:24.906910 1229086 main.go:141] libmachine: (flannel-056871) DBG | domain flannel-056871 has defined MAC address 52:54:00:b2:bb:50 in network mk-flannel-056871
	I0407 13:50:24.907631 1229086 main.go:141] libmachine: (flannel-056871) found domain IP: 192.168.61.247
	I0407 13:50:24.907651 1229086 main.go:141] libmachine: (flannel-056871) reserving static IP address...
	I0407 13:50:24.907669 1229086 main.go:141] libmachine: (flannel-056871) DBG | domain flannel-056871 has current primary IP address 192.168.61.247 and MAC address 52:54:00:b2:bb:50 in network mk-flannel-056871
	I0407 13:50:24.908143 1229086 main.go:141] libmachine: (flannel-056871) DBG | unable to find host DHCP lease matching {name: "flannel-056871", mac: "52:54:00:b2:bb:50", ip: "192.168.61.247"} in network mk-flannel-056871
	I0407 13:50:25.024395 1229086 main.go:141] libmachine: (flannel-056871) DBG | Getting to WaitForSSH function...
	I0407 13:50:25.024431 1229086 main.go:141] libmachine: (flannel-056871) reserved static IP address 192.168.61.247 for domain flannel-056871
	I0407 13:50:25.024445 1229086 main.go:141] libmachine: (flannel-056871) waiting for SSH...
	I0407 13:50:25.028256 1229086 main.go:141] libmachine: (flannel-056871) DBG | domain flannel-056871 has defined MAC address 52:54:00:b2:bb:50 in network mk-flannel-056871
	I0407 13:50:25.029260 1229086 main.go:141] libmachine: (flannel-056871) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b2:bb:50", ip: ""} in network mk-flannel-056871: {Iface:virbr3 ExpiryTime:2025-04-07 14:50:16 +0000 UTC Type:0 Mac:52:54:00:b2:bb:50 Iaid: IPaddr:192.168.61.247 Prefix:24 Hostname:minikube Clientid:01:52:54:00:b2:bb:50}
	I0407 13:50:25.029293 1229086 main.go:141] libmachine: (flannel-056871) DBG | domain flannel-056871 has defined IP address 192.168.61.247 and MAC address 52:54:00:b2:bb:50 in network mk-flannel-056871
	I0407 13:50:25.029469 1229086 main.go:141] libmachine: (flannel-056871) DBG | Using SSH client type: external
	I0407 13:50:25.029496 1229086 main.go:141] libmachine: (flannel-056871) DBG | Using SSH private key: /home/jenkins/minikube-integration/20602-1162386/.minikube/machines/flannel-056871/id_rsa (-rw-------)
	I0407 13:50:25.029527 1229086 main.go:141] libmachine: (flannel-056871) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.61.247 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/20602-1162386/.minikube/machines/flannel-056871/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0407 13:50:25.029538 1229086 main.go:141] libmachine: (flannel-056871) DBG | About to run SSH command:
	I0407 13:50:25.029547 1229086 main.go:141] libmachine: (flannel-056871) DBG | exit 0
	I0407 13:50:25.158823 1229086 main.go:141] libmachine: (flannel-056871) DBG | SSH cmd err, output: <nil>: 
	I0407 13:50:25.159177 1229086 main.go:141] libmachine: (flannel-056871) KVM machine creation complete
	I0407 13:50:25.159481 1229086 main.go:141] libmachine: (flannel-056871) Calling .GetConfigRaw
	I0407 13:50:25.160052 1229086 main.go:141] libmachine: (flannel-056871) Calling .DriverName
	I0407 13:50:25.160271 1229086 main.go:141] libmachine: (flannel-056871) Calling .DriverName
	I0407 13:50:25.160437 1229086 main.go:141] libmachine: Waiting for machine to be running, this may take a few minutes...
	I0407 13:50:25.160453 1229086 main.go:141] libmachine: (flannel-056871) Calling .GetState
	I0407 13:50:25.161976 1229086 main.go:141] libmachine: Detecting operating system of created instance...
	I0407 13:50:25.162002 1229086 main.go:141] libmachine: Waiting for SSH to be available...
	I0407 13:50:25.162010 1229086 main.go:141] libmachine: Getting to WaitForSSH function...
	I0407 13:50:25.162019 1229086 main.go:141] libmachine: (flannel-056871) Calling .GetSSHHostname
	I0407 13:50:25.164297 1229086 main.go:141] libmachine: (flannel-056871) DBG | domain flannel-056871 has defined MAC address 52:54:00:b2:bb:50 in network mk-flannel-056871
	I0407 13:50:25.164661 1229086 main.go:141] libmachine: (flannel-056871) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b2:bb:50", ip: ""} in network mk-flannel-056871: {Iface:virbr3 ExpiryTime:2025-04-07 14:50:16 +0000 UTC Type:0 Mac:52:54:00:b2:bb:50 Iaid: IPaddr:192.168.61.247 Prefix:24 Hostname:flannel-056871 Clientid:01:52:54:00:b2:bb:50}
	I0407 13:50:25.164683 1229086 main.go:141] libmachine: (flannel-056871) DBG | domain flannel-056871 has defined IP address 192.168.61.247 and MAC address 52:54:00:b2:bb:50 in network mk-flannel-056871
	I0407 13:50:25.164814 1229086 main.go:141] libmachine: (flannel-056871) Calling .GetSSHPort
	I0407 13:50:25.165029 1229086 main.go:141] libmachine: (flannel-056871) Calling .GetSSHKeyPath
	I0407 13:50:25.165212 1229086 main.go:141] libmachine: (flannel-056871) Calling .GetSSHKeyPath
	I0407 13:50:25.165340 1229086 main.go:141] libmachine: (flannel-056871) Calling .GetSSHUsername
	I0407 13:50:25.165519 1229086 main.go:141] libmachine: Using SSH client type: native
	I0407 13:50:25.165759 1229086 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x836360] 0x839060 <nil>  [] 0s} 192.168.61.247 22 <nil> <nil>}
	I0407 13:50:25.165770 1229086 main.go:141] libmachine: About to run SSH command:
	exit 0
	I0407 13:50:25.273616 1229086 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0407 13:50:25.273644 1229086 main.go:141] libmachine: Detecting the provisioner...
	I0407 13:50:25.273653 1229086 main.go:141] libmachine: (flannel-056871) Calling .GetSSHHostname
	I0407 13:50:25.276517 1229086 main.go:141] libmachine: (flannel-056871) DBG | domain flannel-056871 has defined MAC address 52:54:00:b2:bb:50 in network mk-flannel-056871
	I0407 13:50:25.276907 1229086 main.go:141] libmachine: (flannel-056871) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b2:bb:50", ip: ""} in network mk-flannel-056871: {Iface:virbr3 ExpiryTime:2025-04-07 14:50:16 +0000 UTC Type:0 Mac:52:54:00:b2:bb:50 Iaid: IPaddr:192.168.61.247 Prefix:24 Hostname:flannel-056871 Clientid:01:52:54:00:b2:bb:50}
	I0407 13:50:25.276943 1229086 main.go:141] libmachine: (flannel-056871) DBG | domain flannel-056871 has defined IP address 192.168.61.247 and MAC address 52:54:00:b2:bb:50 in network mk-flannel-056871
	I0407 13:50:25.277203 1229086 main.go:141] libmachine: (flannel-056871) Calling .GetSSHPort
	I0407 13:50:25.277496 1229086 main.go:141] libmachine: (flannel-056871) Calling .GetSSHKeyPath
	I0407 13:50:25.277725 1229086 main.go:141] libmachine: (flannel-056871) Calling .GetSSHKeyPath
	I0407 13:50:25.277890 1229086 main.go:141] libmachine: (flannel-056871) Calling .GetSSHUsername
	I0407 13:50:25.278114 1229086 main.go:141] libmachine: Using SSH client type: native
	I0407 13:50:25.278425 1229086 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x836360] 0x839060 <nil>  [] 0s} 192.168.61.247 22 <nil> <nil>}
	I0407 13:50:25.278446 1229086 main.go:141] libmachine: About to run SSH command:
	cat /etc/os-release
	I0407 13:50:25.390765 1229086 main.go:141] libmachine: SSH cmd err, output: <nil>: NAME=Buildroot
	VERSION=2023.02.9-dirty
	ID=buildroot
	VERSION_ID=2023.02.9
	PRETTY_NAME="Buildroot 2023.02.9"
	
	I0407 13:50:25.390837 1229086 main.go:141] libmachine: found compatible host: buildroot
	I0407 13:50:25.390846 1229086 main.go:141] libmachine: Provisioning with buildroot...
	I0407 13:50:25.390854 1229086 main.go:141] libmachine: (flannel-056871) Calling .GetMachineName
	I0407 13:50:25.391167 1229086 buildroot.go:166] provisioning hostname "flannel-056871"
	I0407 13:50:25.391215 1229086 main.go:141] libmachine: (flannel-056871) Calling .GetMachineName
	I0407 13:50:25.391404 1229086 main.go:141] libmachine: (flannel-056871) Calling .GetSSHHostname
	I0407 13:50:25.394505 1229086 main.go:141] libmachine: (flannel-056871) DBG | domain flannel-056871 has defined MAC address 52:54:00:b2:bb:50 in network mk-flannel-056871
	I0407 13:50:25.394908 1229086 main.go:141] libmachine: (flannel-056871) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b2:bb:50", ip: ""} in network mk-flannel-056871: {Iface:virbr3 ExpiryTime:2025-04-07 14:50:16 +0000 UTC Type:0 Mac:52:54:00:b2:bb:50 Iaid: IPaddr:192.168.61.247 Prefix:24 Hostname:flannel-056871 Clientid:01:52:54:00:b2:bb:50}
	I0407 13:50:25.394932 1229086 main.go:141] libmachine: (flannel-056871) DBG | domain flannel-056871 has defined IP address 192.168.61.247 and MAC address 52:54:00:b2:bb:50 in network mk-flannel-056871
	I0407 13:50:25.395153 1229086 main.go:141] libmachine: (flannel-056871) Calling .GetSSHPort
	I0407 13:50:25.395368 1229086 main.go:141] libmachine: (flannel-056871) Calling .GetSSHKeyPath
	I0407 13:50:25.395527 1229086 main.go:141] libmachine: (flannel-056871) Calling .GetSSHKeyPath
	I0407 13:50:25.395695 1229086 main.go:141] libmachine: (flannel-056871) Calling .GetSSHUsername
	I0407 13:50:25.395886 1229086 main.go:141] libmachine: Using SSH client type: native
	I0407 13:50:25.396185 1229086 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x836360] 0x839060 <nil>  [] 0s} 192.168.61.247 22 <nil> <nil>}
	I0407 13:50:25.396205 1229086 main.go:141] libmachine: About to run SSH command:
	sudo hostname flannel-056871 && echo "flannel-056871" | sudo tee /etc/hostname
	I0407 13:50:25.523408 1229086 main.go:141] libmachine: SSH cmd err, output: <nil>: flannel-056871
	
	I0407 13:50:25.523441 1229086 main.go:141] libmachine: (flannel-056871) Calling .GetSSHHostname
	I0407 13:50:25.526729 1229086 main.go:141] libmachine: (flannel-056871) DBG | domain flannel-056871 has defined MAC address 52:54:00:b2:bb:50 in network mk-flannel-056871
	I0407 13:50:25.527159 1229086 main.go:141] libmachine: (flannel-056871) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b2:bb:50", ip: ""} in network mk-flannel-056871: {Iface:virbr3 ExpiryTime:2025-04-07 14:50:16 +0000 UTC Type:0 Mac:52:54:00:b2:bb:50 Iaid: IPaddr:192.168.61.247 Prefix:24 Hostname:flannel-056871 Clientid:01:52:54:00:b2:bb:50}
	I0407 13:50:25.527192 1229086 main.go:141] libmachine: (flannel-056871) DBG | domain flannel-056871 has defined IP address 192.168.61.247 and MAC address 52:54:00:b2:bb:50 in network mk-flannel-056871
	I0407 13:50:25.527433 1229086 main.go:141] libmachine: (flannel-056871) Calling .GetSSHPort
	I0407 13:50:25.527628 1229086 main.go:141] libmachine: (flannel-056871) Calling .GetSSHKeyPath
	I0407 13:50:25.527816 1229086 main.go:141] libmachine: (flannel-056871) Calling .GetSSHKeyPath
	I0407 13:50:25.527951 1229086 main.go:141] libmachine: (flannel-056871) Calling .GetSSHUsername
	I0407 13:50:25.528116 1229086 main.go:141] libmachine: Using SSH client type: native
	I0407 13:50:25.528345 1229086 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x836360] 0x839060 <nil>  [] 0s} 192.168.61.247 22 <nil> <nil>}
	I0407 13:50:25.528367 1229086 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sflannel-056871' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 flannel-056871/g' /etc/hosts;
				else 
					echo '127.0.1.1 flannel-056871' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0407 13:50:25.648639 1229086 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0407 13:50:25.648677 1229086 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/20602-1162386/.minikube CaCertPath:/home/jenkins/minikube-integration/20602-1162386/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/20602-1162386/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/20602-1162386/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/20602-1162386/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/20602-1162386/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/20602-1162386/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/20602-1162386/.minikube}
	I0407 13:50:25.648737 1229086 buildroot.go:174] setting up certificates
	I0407 13:50:25.648757 1229086 provision.go:84] configureAuth start
	I0407 13:50:25.648776 1229086 main.go:141] libmachine: (flannel-056871) Calling .GetMachineName
	I0407 13:50:25.649122 1229086 main.go:141] libmachine: (flannel-056871) Calling .GetIP
	I0407 13:50:25.652794 1229086 main.go:141] libmachine: (flannel-056871) DBG | domain flannel-056871 has defined MAC address 52:54:00:b2:bb:50 in network mk-flannel-056871
	I0407 13:50:25.653250 1229086 main.go:141] libmachine: (flannel-056871) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b2:bb:50", ip: ""} in network mk-flannel-056871: {Iface:virbr3 ExpiryTime:2025-04-07 14:50:16 +0000 UTC Type:0 Mac:52:54:00:b2:bb:50 Iaid: IPaddr:192.168.61.247 Prefix:24 Hostname:flannel-056871 Clientid:01:52:54:00:b2:bb:50}
	I0407 13:50:25.653279 1229086 main.go:141] libmachine: (flannel-056871) DBG | domain flannel-056871 has defined IP address 192.168.61.247 and MAC address 52:54:00:b2:bb:50 in network mk-flannel-056871
	I0407 13:50:25.653553 1229086 main.go:141] libmachine: (flannel-056871) Calling .GetSSHHostname
	I0407 13:50:25.658448 1229086 main.go:141] libmachine: (flannel-056871) DBG | domain flannel-056871 has defined MAC address 52:54:00:b2:bb:50 in network mk-flannel-056871
	I0407 13:50:25.659018 1229086 main.go:141] libmachine: (flannel-056871) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b2:bb:50", ip: ""} in network mk-flannel-056871: {Iface:virbr3 ExpiryTime:2025-04-07 14:50:16 +0000 UTC Type:0 Mac:52:54:00:b2:bb:50 Iaid: IPaddr:192.168.61.247 Prefix:24 Hostname:flannel-056871 Clientid:01:52:54:00:b2:bb:50}
	I0407 13:50:25.659051 1229086 main.go:141] libmachine: (flannel-056871) DBG | domain flannel-056871 has defined IP address 192.168.61.247 and MAC address 52:54:00:b2:bb:50 in network mk-flannel-056871
	I0407 13:50:25.659449 1229086 provision.go:143] copyHostCerts
	I0407 13:50:25.659521 1229086 exec_runner.go:144] found /home/jenkins/minikube-integration/20602-1162386/.minikube/cert.pem, removing ...
	I0407 13:50:25.659542 1229086 exec_runner.go:203] rm: /home/jenkins/minikube-integration/20602-1162386/.minikube/cert.pem
	I0407 13:50:25.659632 1229086 exec_runner.go:151] cp: /home/jenkins/minikube-integration/20602-1162386/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/20602-1162386/.minikube/cert.pem (1123 bytes)
	I0407 13:50:25.659734 1229086 exec_runner.go:144] found /home/jenkins/minikube-integration/20602-1162386/.minikube/key.pem, removing ...
	I0407 13:50:25.659744 1229086 exec_runner.go:203] rm: /home/jenkins/minikube-integration/20602-1162386/.minikube/key.pem
	I0407 13:50:25.659768 1229086 exec_runner.go:151] cp: /home/jenkins/minikube-integration/20602-1162386/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/20602-1162386/.minikube/key.pem (1675 bytes)
	I0407 13:50:25.659872 1229086 exec_runner.go:144] found /home/jenkins/minikube-integration/20602-1162386/.minikube/ca.pem, removing ...
	I0407 13:50:25.659884 1229086 exec_runner.go:203] rm: /home/jenkins/minikube-integration/20602-1162386/.minikube/ca.pem
	I0407 13:50:25.659922 1229086 exec_runner.go:151] cp: /home/jenkins/minikube-integration/20602-1162386/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/20602-1162386/.minikube/ca.pem (1078 bytes)
	I0407 13:50:25.659989 1229086 provision.go:117] generating server cert: /home/jenkins/minikube-integration/20602-1162386/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/20602-1162386/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/20602-1162386/.minikube/certs/ca-key.pem org=jenkins.flannel-056871 san=[127.0.0.1 192.168.61.247 flannel-056871 localhost minikube]
	I0407 13:50:25.947030 1229086 provision.go:177] copyRemoteCerts
	I0407 13:50:25.947100 1229086 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0407 13:50:25.947132 1229086 main.go:141] libmachine: (flannel-056871) Calling .GetSSHHostname
	I0407 13:50:25.950246 1229086 main.go:141] libmachine: (flannel-056871) DBG | domain flannel-056871 has defined MAC address 52:54:00:b2:bb:50 in network mk-flannel-056871
	I0407 13:50:25.950566 1229086 main.go:141] libmachine: (flannel-056871) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b2:bb:50", ip: ""} in network mk-flannel-056871: {Iface:virbr3 ExpiryTime:2025-04-07 14:50:16 +0000 UTC Type:0 Mac:52:54:00:b2:bb:50 Iaid: IPaddr:192.168.61.247 Prefix:24 Hostname:flannel-056871 Clientid:01:52:54:00:b2:bb:50}
	I0407 13:50:25.950592 1229086 main.go:141] libmachine: (flannel-056871) DBG | domain flannel-056871 has defined IP address 192.168.61.247 and MAC address 52:54:00:b2:bb:50 in network mk-flannel-056871
	I0407 13:50:25.950777 1229086 main.go:141] libmachine: (flannel-056871) Calling .GetSSHPort
	I0407 13:50:25.951048 1229086 main.go:141] libmachine: (flannel-056871) Calling .GetSSHKeyPath
	I0407 13:50:25.951245 1229086 main.go:141] libmachine: (flannel-056871) Calling .GetSSHUsername
	I0407 13:50:25.951386 1229086 sshutil.go:53] new ssh client: &{IP:192.168.61.247 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20602-1162386/.minikube/machines/flannel-056871/id_rsa Username:docker}
	I0407 13:50:26.036721 1229086 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20602-1162386/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0407 13:50:26.062992 1229086 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20602-1162386/.minikube/machines/server.pem --> /etc/docker/server.pem (1212 bytes)
	I0407 13:50:26.090226 1229086 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20602-1162386/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0407 13:50:26.118689 1229086 provision.go:87] duration metric: took 469.909903ms to configureAuth
	I0407 13:50:26.118728 1229086 buildroot.go:189] setting minikube options for container-runtime
	I0407 13:50:26.118898 1229086 config.go:182] Loaded profile config "flannel-056871": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.32.2
	I0407 13:50:26.118988 1229086 main.go:141] libmachine: (flannel-056871) Calling .GetSSHHostname
	I0407 13:50:24.360978 1230577 preload.go:131] Checking if preload exists for k8s version v1.32.2 and runtime crio
	I0407 13:50:24.361073 1230577 preload.go:146] Found local preload: /home/jenkins/minikube-integration/20602-1162386/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.32.2-cri-o-overlay-amd64.tar.lz4
	I0407 13:50:24.361091 1230577 cache.go:56] Caching tarball of preloaded images
	I0407 13:50:24.361317 1230577 preload.go:172] Found /home/jenkins/minikube-integration/20602-1162386/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.32.2-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I0407 13:50:24.361343 1230577 cache.go:59] Finished verifying existence of preloaded tar for v1.32.2 on crio
	I0407 13:50:24.361473 1230577 profile.go:143] Saving config to /home/jenkins/minikube-integration/20602-1162386/.minikube/profiles/bridge-056871/config.json ...
	I0407 13:50:24.361497 1230577 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20602-1162386/.minikube/profiles/bridge-056871/config.json: {Name:mk787730f7bdbd4b7af3de86222cd95141114af0 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0407 13:50:24.361669 1230577 start.go:360] acquireMachinesLock for bridge-056871: {Name:mk51d4c744fa92d56cf6ac11b1e792c85ef6709a Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0407 13:50:26.622792 1230577 start.go:364] duration metric: took 2.261094169s to acquireMachinesLock for "bridge-056871"
	I0407 13:50:26.622881 1230577 start.go:93] Provisioning new machine with config: &{Name:bridge-056871 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.35.0-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.46-1743675393-20591@sha256:8de8167e280414c9d16e4c3da59bc85bc7c9cc24228af995863fc7bcabfcf727 Memory:3072 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.32.2 ClusterName:bridge-056871 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:bridge} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.32.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.32.2 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0407 13:50:26.623069 1230577 start.go:125] createHost starting for "" (driver="kvm2")
	I0407 13:50:24.235049 1220973 pod_ready.go:103] pod "metrics-server-f79f97bbb-m78vh" in "kube-system" namespace has status "Ready":"False"
	I0407 13:50:26.732867 1220973 pod_ready.go:103] pod "metrics-server-f79f97bbb-m78vh" in "kube-system" namespace has status "Ready":"False"
	I0407 13:50:26.122223 1229086 main.go:141] libmachine: (flannel-056871) DBG | domain flannel-056871 has defined MAC address 52:54:00:b2:bb:50 in network mk-flannel-056871
	I0407 13:50:26.122584 1229086 main.go:141] libmachine: (flannel-056871) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b2:bb:50", ip: ""} in network mk-flannel-056871: {Iface:virbr3 ExpiryTime:2025-04-07 14:50:16 +0000 UTC Type:0 Mac:52:54:00:b2:bb:50 Iaid: IPaddr:192.168.61.247 Prefix:24 Hostname:flannel-056871 Clientid:01:52:54:00:b2:bb:50}
	I0407 13:50:26.122620 1229086 main.go:141] libmachine: (flannel-056871) DBG | domain flannel-056871 has defined IP address 192.168.61.247 and MAC address 52:54:00:b2:bb:50 in network mk-flannel-056871
	I0407 13:50:26.122805 1229086 main.go:141] libmachine: (flannel-056871) Calling .GetSSHPort
	I0407 13:50:26.123089 1229086 main.go:141] libmachine: (flannel-056871) Calling .GetSSHKeyPath
	I0407 13:50:26.123271 1229086 main.go:141] libmachine: (flannel-056871) Calling .GetSSHKeyPath
	I0407 13:50:26.123429 1229086 main.go:141] libmachine: (flannel-056871) Calling .GetSSHUsername
	I0407 13:50:26.123561 1229086 main.go:141] libmachine: Using SSH client type: native
	I0407 13:50:26.123760 1229086 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x836360] 0x839060 <nil>  [] 0s} 192.168.61.247 22 <nil> <nil>}
	I0407 13:50:26.123774 1229086 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0407 13:50:26.364755 1229086 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0407 13:50:26.364789 1229086 main.go:141] libmachine: Checking connection to Docker...
	I0407 13:50:26.364800 1229086 main.go:141] libmachine: (flannel-056871) Calling .GetURL
	I0407 13:50:26.366289 1229086 main.go:141] libmachine: (flannel-056871) DBG | using libvirt version 6000000
	I0407 13:50:26.369351 1229086 main.go:141] libmachine: (flannel-056871) DBG | domain flannel-056871 has defined MAC address 52:54:00:b2:bb:50 in network mk-flannel-056871
	I0407 13:50:26.369870 1229086 main.go:141] libmachine: (flannel-056871) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b2:bb:50", ip: ""} in network mk-flannel-056871: {Iface:virbr3 ExpiryTime:2025-04-07 14:50:16 +0000 UTC Type:0 Mac:52:54:00:b2:bb:50 Iaid: IPaddr:192.168.61.247 Prefix:24 Hostname:flannel-056871 Clientid:01:52:54:00:b2:bb:50}
	I0407 13:50:26.369906 1229086 main.go:141] libmachine: (flannel-056871) DBG | domain flannel-056871 has defined IP address 192.168.61.247 and MAC address 52:54:00:b2:bb:50 in network mk-flannel-056871
	I0407 13:50:26.370118 1229086 main.go:141] libmachine: Docker is up and running!
	I0407 13:50:26.370137 1229086 main.go:141] libmachine: Reticulating splines...
	I0407 13:50:26.370147 1229086 client.go:171] duration metric: took 24.871547676s to LocalClient.Create
	I0407 13:50:26.370181 1229086 start.go:167] duration metric: took 24.871627127s to libmachine.API.Create "flannel-056871"
	I0407 13:50:26.370196 1229086 start.go:293] postStartSetup for "flannel-056871" (driver="kvm2")
	I0407 13:50:26.370210 1229086 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0407 13:50:26.370241 1229086 main.go:141] libmachine: (flannel-056871) Calling .DriverName
	I0407 13:50:26.370524 1229086 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0407 13:50:26.370554 1229086 main.go:141] libmachine: (flannel-056871) Calling .GetSSHHostname
	I0407 13:50:26.373284 1229086 main.go:141] libmachine: (flannel-056871) DBG | domain flannel-056871 has defined MAC address 52:54:00:b2:bb:50 in network mk-flannel-056871
	I0407 13:50:26.373808 1229086 main.go:141] libmachine: (flannel-056871) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b2:bb:50", ip: ""} in network mk-flannel-056871: {Iface:virbr3 ExpiryTime:2025-04-07 14:50:16 +0000 UTC Type:0 Mac:52:54:00:b2:bb:50 Iaid: IPaddr:192.168.61.247 Prefix:24 Hostname:flannel-056871 Clientid:01:52:54:00:b2:bb:50}
	I0407 13:50:26.373840 1229086 main.go:141] libmachine: (flannel-056871) DBG | domain flannel-056871 has defined IP address 192.168.61.247 and MAC address 52:54:00:b2:bb:50 in network mk-flannel-056871
	I0407 13:50:26.374033 1229086 main.go:141] libmachine: (flannel-056871) Calling .GetSSHPort
	I0407 13:50:26.374678 1229086 main.go:141] libmachine: (flannel-056871) Calling .GetSSHKeyPath
	I0407 13:50:26.375055 1229086 main.go:141] libmachine: (flannel-056871) Calling .GetSSHUsername
	I0407 13:50:26.375384 1229086 sshutil.go:53] new ssh client: &{IP:192.168.61.247 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20602-1162386/.minikube/machines/flannel-056871/id_rsa Username:docker}
	I0407 13:50:26.460755 1229086 ssh_runner.go:195] Run: cat /etc/os-release
	I0407 13:50:26.465475 1229086 info.go:137] Remote host: Buildroot 2023.02.9
	I0407 13:50:26.465504 1229086 filesync.go:126] Scanning /home/jenkins/minikube-integration/20602-1162386/.minikube/addons for local assets ...
	I0407 13:50:26.465597 1229086 filesync.go:126] Scanning /home/jenkins/minikube-integration/20602-1162386/.minikube/files for local assets ...
	I0407 13:50:26.465696 1229086 filesync.go:149] local asset: /home/jenkins/minikube-integration/20602-1162386/.minikube/files/etc/ssl/certs/11697162.pem -> 11697162.pem in /etc/ssl/certs
	I0407 13:50:26.465843 1229086 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0407 13:50:26.476310 1229086 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20602-1162386/.minikube/files/etc/ssl/certs/11697162.pem --> /etc/ssl/certs/11697162.pem (1708 bytes)
	I0407 13:50:26.504930 1229086 start.go:296] duration metric: took 134.717884ms for postStartSetup
	I0407 13:50:26.505003 1229086 main.go:141] libmachine: (flannel-056871) Calling .GetConfigRaw
	I0407 13:50:26.505638 1229086 main.go:141] libmachine: (flannel-056871) Calling .GetIP
	I0407 13:50:26.508444 1229086 main.go:141] libmachine: (flannel-056871) DBG | domain flannel-056871 has defined MAC address 52:54:00:b2:bb:50 in network mk-flannel-056871
	I0407 13:50:26.508870 1229086 main.go:141] libmachine: (flannel-056871) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b2:bb:50", ip: ""} in network mk-flannel-056871: {Iface:virbr3 ExpiryTime:2025-04-07 14:50:16 +0000 UTC Type:0 Mac:52:54:00:b2:bb:50 Iaid: IPaddr:192.168.61.247 Prefix:24 Hostname:flannel-056871 Clientid:01:52:54:00:b2:bb:50}
	I0407 13:50:26.508897 1229086 main.go:141] libmachine: (flannel-056871) DBG | domain flannel-056871 has defined IP address 192.168.61.247 and MAC address 52:54:00:b2:bb:50 in network mk-flannel-056871
	I0407 13:50:26.509247 1229086 profile.go:143] Saving config to /home/jenkins/minikube-integration/20602-1162386/.minikube/profiles/flannel-056871/config.json ...
	I0407 13:50:26.509497 1229086 start.go:128] duration metric: took 25.03370963s to createHost
	I0407 13:50:26.509528 1229086 main.go:141] libmachine: (flannel-056871) Calling .GetSSHHostname
	I0407 13:50:26.512597 1229086 main.go:141] libmachine: (flannel-056871) DBG | domain flannel-056871 has defined MAC address 52:54:00:b2:bb:50 in network mk-flannel-056871
	I0407 13:50:26.513151 1229086 main.go:141] libmachine: (flannel-056871) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b2:bb:50", ip: ""} in network mk-flannel-056871: {Iface:virbr3 ExpiryTime:2025-04-07 14:50:16 +0000 UTC Type:0 Mac:52:54:00:b2:bb:50 Iaid: IPaddr:192.168.61.247 Prefix:24 Hostname:flannel-056871 Clientid:01:52:54:00:b2:bb:50}
	I0407 13:50:26.513192 1229086 main.go:141] libmachine: (flannel-056871) DBG | domain flannel-056871 has defined IP address 192.168.61.247 and MAC address 52:54:00:b2:bb:50 in network mk-flannel-056871
	I0407 13:50:26.513495 1229086 main.go:141] libmachine: (flannel-056871) Calling .GetSSHPort
	I0407 13:50:26.513772 1229086 main.go:141] libmachine: (flannel-056871) Calling .GetSSHKeyPath
	I0407 13:50:26.513968 1229086 main.go:141] libmachine: (flannel-056871) Calling .GetSSHKeyPath
	I0407 13:50:26.514138 1229086 main.go:141] libmachine: (flannel-056871) Calling .GetSSHUsername
	I0407 13:50:26.514375 1229086 main.go:141] libmachine: Using SSH client type: native
	I0407 13:50:26.514600 1229086 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x836360] 0x839060 <nil>  [] 0s} 192.168.61.247 22 <nil> <nil>}
	I0407 13:50:26.514610 1229086 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0407 13:50:26.622604 1229086 main.go:141] libmachine: SSH cmd err, output: <nil>: 1744033826.573479899
	
	I0407 13:50:26.622636 1229086 fix.go:216] guest clock: 1744033826.573479899
	I0407 13:50:26.622647 1229086 fix.go:229] Guest: 2025-04-07 13:50:26.573479899 +0000 UTC Remote: 2025-04-07 13:50:26.509514288 +0000 UTC m=+25.440251336 (delta=63.965611ms)
	I0407 13:50:26.622676 1229086 fix.go:200] guest clock delta is within tolerance: 63.965611ms
	I0407 13:50:26.622684 1229086 start.go:83] releasing machines lock for "flannel-056871", held for 25.147016716s
	I0407 13:50:26.622719 1229086 main.go:141] libmachine: (flannel-056871) Calling .DriverName
	I0407 13:50:26.623017 1229086 main.go:141] libmachine: (flannel-056871) Calling .GetIP
	I0407 13:50:26.626237 1229086 main.go:141] libmachine: (flannel-056871) DBG | domain flannel-056871 has defined MAC address 52:54:00:b2:bb:50 in network mk-flannel-056871
	I0407 13:50:26.626627 1229086 main.go:141] libmachine: (flannel-056871) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b2:bb:50", ip: ""} in network mk-flannel-056871: {Iface:virbr3 ExpiryTime:2025-04-07 14:50:16 +0000 UTC Type:0 Mac:52:54:00:b2:bb:50 Iaid: IPaddr:192.168.61.247 Prefix:24 Hostname:flannel-056871 Clientid:01:52:54:00:b2:bb:50}
	I0407 13:50:26.626661 1229086 main.go:141] libmachine: (flannel-056871) DBG | domain flannel-056871 has defined IP address 192.168.61.247 and MAC address 52:54:00:b2:bb:50 in network mk-flannel-056871
	I0407 13:50:26.626774 1229086 main.go:141] libmachine: (flannel-056871) Calling .DriverName
	I0407 13:50:26.627398 1229086 main.go:141] libmachine: (flannel-056871) Calling .DriverName
	I0407 13:50:26.627579 1229086 main.go:141] libmachine: (flannel-056871) Calling .DriverName
	I0407 13:50:26.627661 1229086 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0407 13:50:26.627701 1229086 main.go:141] libmachine: (flannel-056871) Calling .GetSSHHostname
	I0407 13:50:26.627833 1229086 ssh_runner.go:195] Run: cat /version.json
	I0407 13:50:26.627863 1229086 main.go:141] libmachine: (flannel-056871) Calling .GetSSHHostname
	I0407 13:50:26.631003 1229086 main.go:141] libmachine: (flannel-056871) DBG | domain flannel-056871 has defined MAC address 52:54:00:b2:bb:50 in network mk-flannel-056871
	I0407 13:50:26.631283 1229086 main.go:141] libmachine: (flannel-056871) DBG | domain flannel-056871 has defined MAC address 52:54:00:b2:bb:50 in network mk-flannel-056871
	I0407 13:50:26.631492 1229086 main.go:141] libmachine: (flannel-056871) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b2:bb:50", ip: ""} in network mk-flannel-056871: {Iface:virbr3 ExpiryTime:2025-04-07 14:50:16 +0000 UTC Type:0 Mac:52:54:00:b2:bb:50 Iaid: IPaddr:192.168.61.247 Prefix:24 Hostname:flannel-056871 Clientid:01:52:54:00:b2:bb:50}
	I0407 13:50:26.631532 1229086 main.go:141] libmachine: (flannel-056871) DBG | domain flannel-056871 has defined IP address 192.168.61.247 and MAC address 52:54:00:b2:bb:50 in network mk-flannel-056871
	I0407 13:50:26.631652 1229086 main.go:141] libmachine: (flannel-056871) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b2:bb:50", ip: ""} in network mk-flannel-056871: {Iface:virbr3 ExpiryTime:2025-04-07 14:50:16 +0000 UTC Type:0 Mac:52:54:00:b2:bb:50 Iaid: IPaddr:192.168.61.247 Prefix:24 Hostname:flannel-056871 Clientid:01:52:54:00:b2:bb:50}
	I0407 13:50:26.631676 1229086 main.go:141] libmachine: (flannel-056871) DBG | domain flannel-056871 has defined IP address 192.168.61.247 and MAC address 52:54:00:b2:bb:50 in network mk-flannel-056871
	I0407 13:50:26.631741 1229086 main.go:141] libmachine: (flannel-056871) Calling .GetSSHPort
	I0407 13:50:26.631895 1229086 main.go:141] libmachine: (flannel-056871) Calling .GetSSHPort
	I0407 13:50:26.631976 1229086 main.go:141] libmachine: (flannel-056871) Calling .GetSSHKeyPath
	I0407 13:50:26.632068 1229086 main.go:141] libmachine: (flannel-056871) Calling .GetSSHKeyPath
	I0407 13:50:26.632135 1229086 main.go:141] libmachine: (flannel-056871) Calling .GetSSHUsername
	I0407 13:50:26.632154 1229086 main.go:141] libmachine: (flannel-056871) Calling .GetSSHUsername
	I0407 13:50:26.632262 1229086 sshutil.go:53] new ssh client: &{IP:192.168.61.247 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20602-1162386/.minikube/machines/flannel-056871/id_rsa Username:docker}
	I0407 13:50:26.632312 1229086 sshutil.go:53] new ssh client: &{IP:192.168.61.247 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20602-1162386/.minikube/machines/flannel-056871/id_rsa Username:docker}
	I0407 13:50:26.738942 1229086 ssh_runner.go:195] Run: systemctl --version
	I0407 13:50:26.745563 1229086 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0407 13:50:26.918903 1229086 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0407 13:50:26.924740 1229086 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0407 13:50:26.924829 1229086 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0407 13:50:26.943036 1229086 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0407 13:50:26.943074 1229086 start.go:495] detecting cgroup driver to use...
	I0407 13:50:26.943159 1229086 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0407 13:50:26.960762 1229086 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0407 13:50:26.977092 1229086 docker.go:217] disabling cri-docker service (if available) ...
	I0407 13:50:26.977178 1229086 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0407 13:50:26.992721 1229086 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0407 13:50:27.009013 1229086 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0407 13:50:27.136407 1229086 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0407 13:50:27.281179 1229086 docker.go:233] disabling docker service ...
	I0407 13:50:27.281260 1229086 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0407 13:50:27.295527 1229086 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0407 13:50:27.309965 1229086 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0407 13:50:27.462386 1229086 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0407 13:50:27.615247 1229086 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0407 13:50:27.631262 1229086 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0407 13:50:27.651829 1229086 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10" pause image...
	I0407 13:50:27.651886 1229086 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0407 13:50:27.664328 1229086 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0407 13:50:27.664406 1229086 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0407 13:50:27.677109 1229086 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0407 13:50:27.688301 1229086 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0407 13:50:27.700257 1229086 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0407 13:50:27.712546 1229086 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0407 13:50:27.724951 1229086 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I0407 13:50:27.747202 1229086 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I0407 13:50:27.761073 1229086 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0407 13:50:27.773916 1229086 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0407 13:50:27.774002 1229086 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0407 13:50:27.791701 1229086 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0407 13:50:27.802543 1229086 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0407 13:50:27.941266 1229086 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0407 13:50:28.051193 1229086 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I0407 13:50:28.051286 1229086 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0407 13:50:28.056697 1229086 start.go:563] Will wait 60s for crictl version
	I0407 13:50:28.056862 1229086 ssh_runner.go:195] Run: which crictl
	I0407 13:50:28.061529 1229086 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0407 13:50:28.103833 1229086 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0407 13:50:28.103922 1229086 ssh_runner.go:195] Run: crio --version
	I0407 13:50:28.134968 1229086 ssh_runner.go:195] Run: crio --version
	I0407 13:50:28.175545 1229086 out.go:177] * Preparing Kubernetes v1.32.2 on CRI-O 1.29.1 ...
	I0407 13:50:26.625302 1230577 out.go:235] * Creating kvm2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	I0407 13:50:26.625526 1230577 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0407 13:50:26.625577 1230577 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0407 13:50:26.646372 1230577 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41575
	I0407 13:50:26.646944 1230577 main.go:141] libmachine: () Calling .GetVersion
	I0407 13:50:26.647595 1230577 main.go:141] libmachine: Using API Version  1
	I0407 13:50:26.647621 1230577 main.go:141] libmachine: () Calling .SetConfigRaw
	I0407 13:50:26.648019 1230577 main.go:141] libmachine: () Calling .GetMachineName
	I0407 13:50:26.648208 1230577 main.go:141] libmachine: (bridge-056871) Calling .GetMachineName
	I0407 13:50:26.648372 1230577 main.go:141] libmachine: (bridge-056871) Calling .DriverName
	I0407 13:50:26.648516 1230577 start.go:159] libmachine.API.Create for "bridge-056871" (driver="kvm2")
	I0407 13:50:26.648543 1230577 client.go:168] LocalClient.Create starting
	I0407 13:50:26.648578 1230577 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/20602-1162386/.minikube/certs/ca.pem
	I0407 13:50:26.648614 1230577 main.go:141] libmachine: Decoding PEM data...
	I0407 13:50:26.648627 1230577 main.go:141] libmachine: Parsing certificate...
	I0407 13:50:26.648686 1230577 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/20602-1162386/.minikube/certs/cert.pem
	I0407 13:50:26.648708 1230577 main.go:141] libmachine: Decoding PEM data...
	I0407 13:50:26.648719 1230577 main.go:141] libmachine: Parsing certificate...
	I0407 13:50:26.648734 1230577 main.go:141] libmachine: Running pre-create checks...
	I0407 13:50:26.648744 1230577 main.go:141] libmachine: (bridge-056871) Calling .PreCreateCheck
	I0407 13:50:26.649147 1230577 main.go:141] libmachine: (bridge-056871) Calling .GetConfigRaw
	I0407 13:50:26.649658 1230577 main.go:141] libmachine: Creating machine...
	I0407 13:50:26.649677 1230577 main.go:141] libmachine: (bridge-056871) Calling .Create
	I0407 13:50:26.649891 1230577 main.go:141] libmachine: (bridge-056871) creating KVM machine...
	I0407 13:50:26.649910 1230577 main.go:141] libmachine: (bridge-056871) creating network...
	I0407 13:50:26.651193 1230577 main.go:141] libmachine: (bridge-056871) DBG | found existing default KVM network
	I0407 13:50:26.652004 1230577 main.go:141] libmachine: (bridge-056871) DBG | I0407 13:50:26.651784 1230645 network.go:211] skipping subnet 192.168.39.0/24 that is taken: &{IP:192.168.39.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.39.0/24 Gateway:192.168.39.1 ClientMin:192.168.39.2 ClientMax:192.168.39.254 Broadcast:192.168.39.255 IsPrivate:true Interface:{IfaceName:virbr1 IfaceIPv4:192.168.39.1 IfaceMTU:1500 IfaceMAC:52:54:00:87:f6:77} reservation:<nil>}
	I0407 13:50:26.653222 1230577 main.go:141] libmachine: (bridge-056871) DBG | I0407 13:50:26.653125 1230645 network.go:206] using free private subnet 192.168.50.0/24: &{IP:192.168.50.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.50.0/24 Gateway:192.168.50.1 ClientMin:192.168.50.2 ClientMax:192.168.50.254 Broadcast:192.168.50.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc0002098c0}
	I0407 13:50:26.653249 1230577 main.go:141] libmachine: (bridge-056871) DBG | created network xml: 
	I0407 13:50:26.653261 1230577 main.go:141] libmachine: (bridge-056871) DBG | <network>
	I0407 13:50:26.653269 1230577 main.go:141] libmachine: (bridge-056871) DBG |   <name>mk-bridge-056871</name>
	I0407 13:50:26.653277 1230577 main.go:141] libmachine: (bridge-056871) DBG |   <dns enable='no'/>
	I0407 13:50:26.653283 1230577 main.go:141] libmachine: (bridge-056871) DBG |   
	I0407 13:50:26.653293 1230577 main.go:141] libmachine: (bridge-056871) DBG |   <ip address='192.168.50.1' netmask='255.255.255.0'>
	I0407 13:50:26.653304 1230577 main.go:141] libmachine: (bridge-056871) DBG |     <dhcp>
	I0407 13:50:26.653315 1230577 main.go:141] libmachine: (bridge-056871) DBG |       <range start='192.168.50.2' end='192.168.50.253'/>
	I0407 13:50:26.653328 1230577 main.go:141] libmachine: (bridge-056871) DBG |     </dhcp>
	I0407 13:50:26.653337 1230577 main.go:141] libmachine: (bridge-056871) DBG |   </ip>
	I0407 13:50:26.653347 1230577 main.go:141] libmachine: (bridge-056871) DBG |   
	I0407 13:50:26.653355 1230577 main.go:141] libmachine: (bridge-056871) DBG | </network>
	I0407 13:50:26.653363 1230577 main.go:141] libmachine: (bridge-056871) DBG | 
	I0407 13:50:26.659288 1230577 main.go:141] libmachine: (bridge-056871) DBG | trying to create private KVM network mk-bridge-056871 192.168.50.0/24...
	I0407 13:50:26.754740 1230577 main.go:141] libmachine: (bridge-056871) DBG | private KVM network mk-bridge-056871 192.168.50.0/24 created
	I0407 13:50:26.754786 1230577 main.go:141] libmachine: (bridge-056871) setting up store path in /home/jenkins/minikube-integration/20602-1162386/.minikube/machines/bridge-056871 ...
	I0407 13:50:26.754812 1230577 main.go:141] libmachine: (bridge-056871) building disk image from file:///home/jenkins/minikube-integration/20602-1162386/.minikube/cache/iso/amd64/minikube-v1.35.0-amd64.iso
	I0407 13:50:26.754885 1230577 main.go:141] libmachine: (bridge-056871) DBG | I0407 13:50:26.754809 1230645 common.go:144] Making disk image using store path: /home/jenkins/minikube-integration/20602-1162386/.minikube
	I0407 13:50:26.755137 1230577 main.go:141] libmachine: (bridge-056871) Downloading /home/jenkins/minikube-integration/20602-1162386/.minikube/cache/boot2docker.iso from file:///home/jenkins/minikube-integration/20602-1162386/.minikube/cache/iso/amd64/minikube-v1.35.0-amd64.iso...
	I0407 13:50:27.080306 1230577 main.go:141] libmachine: (bridge-056871) DBG | I0407 13:50:27.080159 1230645 common.go:151] Creating ssh key: /home/jenkins/minikube-integration/20602-1162386/.minikube/machines/bridge-056871/id_rsa...
	I0407 13:50:27.470492 1230577 main.go:141] libmachine: (bridge-056871) DBG | I0407 13:50:27.470358 1230645 common.go:157] Creating raw disk image: /home/jenkins/minikube-integration/20602-1162386/.minikube/machines/bridge-056871/bridge-056871.rawdisk...
	I0407 13:50:27.470527 1230577 main.go:141] libmachine: (bridge-056871) DBG | Writing magic tar header
	I0407 13:50:27.470543 1230577 main.go:141] libmachine: (bridge-056871) DBG | Writing SSH key tar header
	I0407 13:50:27.470614 1230577 main.go:141] libmachine: (bridge-056871) DBG | I0407 13:50:27.470566 1230645 common.go:171] Fixing permissions on /home/jenkins/minikube-integration/20602-1162386/.minikube/machines/bridge-056871 ...
	I0407 13:50:27.470742 1230577 main.go:141] libmachine: (bridge-056871) DBG | checking permissions on dir: /home/jenkins/minikube-integration/20602-1162386/.minikube/machines/bridge-056871
	I0407 13:50:27.470774 1230577 main.go:141] libmachine: (bridge-056871) setting executable bit set on /home/jenkins/minikube-integration/20602-1162386/.minikube/machines/bridge-056871 (perms=drwx------)
	I0407 13:50:27.470787 1230577 main.go:141] libmachine: (bridge-056871) DBG | checking permissions on dir: /home/jenkins/minikube-integration/20602-1162386/.minikube/machines
	I0407 13:50:27.470818 1230577 main.go:141] libmachine: (bridge-056871) DBG | checking permissions on dir: /home/jenkins/minikube-integration/20602-1162386/.minikube
	I0407 13:50:27.470830 1230577 main.go:141] libmachine: (bridge-056871) DBG | checking permissions on dir: /home/jenkins/minikube-integration/20602-1162386
	I0407 13:50:27.470842 1230577 main.go:141] libmachine: (bridge-056871) DBG | checking permissions on dir: /home/jenkins/minikube-integration
	I0407 13:50:27.470856 1230577 main.go:141] libmachine: (bridge-056871) setting executable bit set on /home/jenkins/minikube-integration/20602-1162386/.minikube/machines (perms=drwxr-xr-x)
	I0407 13:50:27.470863 1230577 main.go:141] libmachine: (bridge-056871) DBG | checking permissions on dir: /home/jenkins
	I0407 13:50:27.470876 1230577 main.go:141] libmachine: (bridge-056871) DBG | checking permissions on dir: /home
	I0407 13:50:27.470888 1230577 main.go:141] libmachine: (bridge-056871) DBG | skipping /home - not owner
	I0407 13:50:27.470897 1230577 main.go:141] libmachine: (bridge-056871) setting executable bit set on /home/jenkins/minikube-integration/20602-1162386/.minikube (perms=drwxr-xr-x)
	I0407 13:50:27.470910 1230577 main.go:141] libmachine: (bridge-056871) setting executable bit set on /home/jenkins/minikube-integration/20602-1162386 (perms=drwxrwxr-x)
	I0407 13:50:27.470922 1230577 main.go:141] libmachine: (bridge-056871) setting executable bit set on /home/jenkins/minikube-integration (perms=drwxrwxr-x)
	I0407 13:50:27.470934 1230577 main.go:141] libmachine: (bridge-056871) setting executable bit set on /home/jenkins (perms=drwxr-xr-x)
	I0407 13:50:27.470943 1230577 main.go:141] libmachine: (bridge-056871) creating domain...
	I0407 13:50:27.472089 1230577 main.go:141] libmachine: (bridge-056871) define libvirt domain using xml: 
	I0407 13:50:27.472123 1230577 main.go:141] libmachine: (bridge-056871) <domain type='kvm'>
	I0407 13:50:27.472134 1230577 main.go:141] libmachine: (bridge-056871)   <name>bridge-056871</name>
	I0407 13:50:27.472141 1230577 main.go:141] libmachine: (bridge-056871)   <memory unit='MiB'>3072</memory>
	I0407 13:50:27.472151 1230577 main.go:141] libmachine: (bridge-056871)   <vcpu>2</vcpu>
	I0407 13:50:27.472158 1230577 main.go:141] libmachine: (bridge-056871)   <features>
	I0407 13:50:27.472165 1230577 main.go:141] libmachine: (bridge-056871)     <acpi/>
	I0407 13:50:27.472178 1230577 main.go:141] libmachine: (bridge-056871)     <apic/>
	I0407 13:50:27.472186 1230577 main.go:141] libmachine: (bridge-056871)     <pae/>
	I0407 13:50:27.472193 1230577 main.go:141] libmachine: (bridge-056871)     
	I0407 13:50:27.472204 1230577 main.go:141] libmachine: (bridge-056871)   </features>
	I0407 13:50:27.472215 1230577 main.go:141] libmachine: (bridge-056871)   <cpu mode='host-passthrough'>
	I0407 13:50:27.472251 1230577 main.go:141] libmachine: (bridge-056871)   
	I0407 13:50:27.472280 1230577 main.go:141] libmachine: (bridge-056871)   </cpu>
	I0407 13:50:27.472291 1230577 main.go:141] libmachine: (bridge-056871)   <os>
	I0407 13:50:27.472305 1230577 main.go:141] libmachine: (bridge-056871)     <type>hvm</type>
	I0407 13:50:27.472318 1230577 main.go:141] libmachine: (bridge-056871)     <boot dev='cdrom'/>
	I0407 13:50:27.472325 1230577 main.go:141] libmachine: (bridge-056871)     <boot dev='hd'/>
	I0407 13:50:27.472337 1230577 main.go:141] libmachine: (bridge-056871)     <bootmenu enable='no'/>
	I0407 13:50:27.472342 1230577 main.go:141] libmachine: (bridge-056871)   </os>
	I0407 13:50:27.472347 1230577 main.go:141] libmachine: (bridge-056871)   <devices>
	I0407 13:50:27.472355 1230577 main.go:141] libmachine: (bridge-056871)     <disk type='file' device='cdrom'>
	I0407 13:50:27.472379 1230577 main.go:141] libmachine: (bridge-056871)       <source file='/home/jenkins/minikube-integration/20602-1162386/.minikube/machines/bridge-056871/boot2docker.iso'/>
	I0407 13:50:27.472398 1230577 main.go:141] libmachine: (bridge-056871)       <target dev='hdc' bus='scsi'/>
	I0407 13:50:27.472407 1230577 main.go:141] libmachine: (bridge-056871)       <readonly/>
	I0407 13:50:27.472417 1230577 main.go:141] libmachine: (bridge-056871)     </disk>
	I0407 13:50:27.472428 1230577 main.go:141] libmachine: (bridge-056871)     <disk type='file' device='disk'>
	I0407 13:50:27.472440 1230577 main.go:141] libmachine: (bridge-056871)       <driver name='qemu' type='raw' cache='default' io='threads' />
	I0407 13:50:27.472456 1230577 main.go:141] libmachine: (bridge-056871)       <source file='/home/jenkins/minikube-integration/20602-1162386/.minikube/machines/bridge-056871/bridge-056871.rawdisk'/>
	I0407 13:50:27.472466 1230577 main.go:141] libmachine: (bridge-056871)       <target dev='hda' bus='virtio'/>
	I0407 13:50:27.472473 1230577 main.go:141] libmachine: (bridge-056871)     </disk>
	I0407 13:50:27.472481 1230577 main.go:141] libmachine: (bridge-056871)     <interface type='network'>
	I0407 13:50:27.472494 1230577 main.go:141] libmachine: (bridge-056871)       <source network='mk-bridge-056871'/>
	I0407 13:50:27.472513 1230577 main.go:141] libmachine: (bridge-056871)       <model type='virtio'/>
	I0407 13:50:27.472525 1230577 main.go:141] libmachine: (bridge-056871)     </interface>
	I0407 13:50:27.472535 1230577 main.go:141] libmachine: (bridge-056871)     <interface type='network'>
	I0407 13:50:27.472545 1230577 main.go:141] libmachine: (bridge-056871)       <source network='default'/>
	I0407 13:50:27.472554 1230577 main.go:141] libmachine: (bridge-056871)       <model type='virtio'/>
	I0407 13:50:27.472564 1230577 main.go:141] libmachine: (bridge-056871)     </interface>
	I0407 13:50:27.472569 1230577 main.go:141] libmachine: (bridge-056871)     <serial type='pty'>
	I0407 13:50:27.472580 1230577 main.go:141] libmachine: (bridge-056871)       <target port='0'/>
	I0407 13:50:27.472593 1230577 main.go:141] libmachine: (bridge-056871)     </serial>
	I0407 13:50:27.472621 1230577 main.go:141] libmachine: (bridge-056871)     <console type='pty'>
	I0407 13:50:27.472644 1230577 main.go:141] libmachine: (bridge-056871)       <target type='serial' port='0'/>
	I0407 13:50:27.472655 1230577 main.go:141] libmachine: (bridge-056871)     </console>
	I0407 13:50:27.472669 1230577 main.go:141] libmachine: (bridge-056871)     <rng model='virtio'>
	I0407 13:50:27.472683 1230577 main.go:141] libmachine: (bridge-056871)       <backend model='random'>/dev/random</backend>
	I0407 13:50:27.472689 1230577 main.go:141] libmachine: (bridge-056871)     </rng>
	I0407 13:50:27.472699 1230577 main.go:141] libmachine: (bridge-056871)     
	I0407 13:50:27.472705 1230577 main.go:141] libmachine: (bridge-056871)     
	I0407 13:50:27.472716 1230577 main.go:141] libmachine: (bridge-056871)   </devices>
	I0407 13:50:27.472724 1230577 main.go:141] libmachine: (bridge-056871) </domain>
	I0407 13:50:27.472738 1230577 main.go:141] libmachine: (bridge-056871) 
	I0407 13:50:27.477583 1230577 main.go:141] libmachine: (bridge-056871) DBG | domain bridge-056871 has defined MAC address 52:54:00:53:dd:9f in network default
	I0407 13:50:27.478323 1230577 main.go:141] libmachine: (bridge-056871) starting domain...
	I0407 13:50:27.478342 1230577 main.go:141] libmachine: (bridge-056871) ensuring networks are active...
	I0407 13:50:27.478352 1230577 main.go:141] libmachine: (bridge-056871) DBG | domain bridge-056871 has defined MAC address 52:54:00:d9:a5:38 in network mk-bridge-056871
	I0407 13:50:27.479117 1230577 main.go:141] libmachine: (bridge-056871) Ensuring network default is active
	I0407 13:50:27.479451 1230577 main.go:141] libmachine: (bridge-056871) Ensuring network mk-bridge-056871 is active
	I0407 13:50:27.479970 1230577 main.go:141] libmachine: (bridge-056871) getting domain XML...
	I0407 13:50:27.480738 1230577 main.go:141] libmachine: (bridge-056871) creating domain...
	I0407 13:50:28.949656 1230577 main.go:141] libmachine: (bridge-056871) waiting for IP...
	I0407 13:50:28.950754 1230577 main.go:141] libmachine: (bridge-056871) DBG | domain bridge-056871 has defined MAC address 52:54:00:d9:a5:38 in network mk-bridge-056871
	I0407 13:50:28.951466 1230577 main.go:141] libmachine: (bridge-056871) DBG | unable to find current IP address of domain bridge-056871 in network mk-bridge-056871
	I0407 13:50:28.951521 1230577 main.go:141] libmachine: (bridge-056871) DBG | I0407 13:50:28.951469 1230645 retry.go:31] will retry after 215.247092ms: waiting for domain to come up
	I0407 13:50:29.168411 1230577 main.go:141] libmachine: (bridge-056871) DBG | domain bridge-056871 has defined MAC address 52:54:00:d9:a5:38 in network mk-bridge-056871
	I0407 13:50:29.169205 1230577 main.go:141] libmachine: (bridge-056871) DBG | unable to find current IP address of domain bridge-056871 in network mk-bridge-056871
	I0407 13:50:29.169291 1230577 main.go:141] libmachine: (bridge-056871) DBG | I0407 13:50:29.169186 1230645 retry.go:31] will retry after 290.693734ms: waiting for domain to come up
	I0407 13:50:28.176892 1229086 main.go:141] libmachine: (flannel-056871) Calling .GetIP
	I0407 13:50:28.180543 1229086 main.go:141] libmachine: (flannel-056871) DBG | domain flannel-056871 has defined MAC address 52:54:00:b2:bb:50 in network mk-flannel-056871
	I0407 13:50:28.181100 1229086 main.go:141] libmachine: (flannel-056871) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b2:bb:50", ip: ""} in network mk-flannel-056871: {Iface:virbr3 ExpiryTime:2025-04-07 14:50:16 +0000 UTC Type:0 Mac:52:54:00:b2:bb:50 Iaid: IPaddr:192.168.61.247 Prefix:24 Hostname:flannel-056871 Clientid:01:52:54:00:b2:bb:50}
	I0407 13:50:28.181127 1229086 main.go:141] libmachine: (flannel-056871) DBG | domain flannel-056871 has defined IP address 192.168.61.247 and MAC address 52:54:00:b2:bb:50 in network mk-flannel-056871
	I0407 13:50:28.181509 1229086 ssh_runner.go:195] Run: grep 192.168.61.1	host.minikube.internal$ /etc/hosts
	I0407 13:50:28.185853 1229086 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.61.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0407 13:50:28.200991 1229086 kubeadm.go:883] updating cluster {Name:flannel-056871 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.35.0-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.46-1743675393-20591@sha256:8de8167e280414c9d16e4c3da59bc85bc7c9cc24228af995863fc7bcabfcf727 Memory:3072 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.32.2 ClusterName:flannel-056871 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:flannel} Nodes:[{Name: IP:192.168.61.247 Port:8443 KubernetesVersion:v1.32.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0407 13:50:28.201123 1229086 preload.go:131] Checking if preload exists for k8s version v1.32.2 and runtime crio
	I0407 13:50:28.201182 1229086 ssh_runner.go:195] Run: sudo crictl images --output json
	I0407 13:50:28.238677 1229086 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.32.2". assuming images are not preloaded.
	I0407 13:50:28.238758 1229086 ssh_runner.go:195] Run: which lz4
	I0407 13:50:28.243012 1229086 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I0407 13:50:28.247954 1229086 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0407 13:50:28.248000 1229086 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20602-1162386/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.32.2-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (399124012 bytes)
	I0407 13:50:29.817006 1229086 crio.go:462] duration metric: took 1.57406554s to copy over tarball
	I0407 13:50:29.817108 1229086 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I0407 13:50:28.733101 1220973 pod_ready.go:103] pod "metrics-server-f79f97bbb-m78vh" in "kube-system" namespace has status "Ready":"False"
	I0407 13:50:30.734125 1220973 pod_ready.go:103] pod "metrics-server-f79f97bbb-m78vh" in "kube-system" namespace has status "Ready":"False"
	I0407 13:50:29.462243 1230577 main.go:141] libmachine: (bridge-056871) DBG | domain bridge-056871 has defined MAC address 52:54:00:d9:a5:38 in network mk-bridge-056871
	I0407 13:50:29.463004 1230577 main.go:141] libmachine: (bridge-056871) DBG | unable to find current IP address of domain bridge-056871 in network mk-bridge-056871
	I0407 13:50:29.463042 1230577 main.go:141] libmachine: (bridge-056871) DBG | I0407 13:50:29.462964 1230645 retry.go:31] will retry after 467.697129ms: waiting for domain to come up
	I0407 13:50:29.932873 1230577 main.go:141] libmachine: (bridge-056871) DBG | domain bridge-056871 has defined MAC address 52:54:00:d9:a5:38 in network mk-bridge-056871
	I0407 13:50:29.933567 1230577 main.go:141] libmachine: (bridge-056871) DBG | unable to find current IP address of domain bridge-056871 in network mk-bridge-056871
	I0407 13:50:29.933596 1230577 main.go:141] libmachine: (bridge-056871) DBG | I0407 13:50:29.933533 1230645 retry.go:31] will retry after 535.905567ms: waiting for domain to come up
	I0407 13:50:30.471706 1230577 main.go:141] libmachine: (bridge-056871) DBG | domain bridge-056871 has defined MAC address 52:54:00:d9:a5:38 in network mk-bridge-056871
	I0407 13:50:30.472379 1230577 main.go:141] libmachine: (bridge-056871) DBG | unable to find current IP address of domain bridge-056871 in network mk-bridge-056871
	I0407 13:50:30.472407 1230577 main.go:141] libmachine: (bridge-056871) DBG | I0407 13:50:30.472330 1230645 retry.go:31] will retry after 618.480423ms: waiting for domain to come up
	I0407 13:50:31.092807 1230577 main.go:141] libmachine: (bridge-056871) DBG | domain bridge-056871 has defined MAC address 52:54:00:d9:a5:38 in network mk-bridge-056871
	I0407 13:50:31.093788 1230577 main.go:141] libmachine: (bridge-056871) DBG | unable to find current IP address of domain bridge-056871 in network mk-bridge-056871
	I0407 13:50:31.093829 1230577 main.go:141] libmachine: (bridge-056871) DBG | I0407 13:50:31.093727 1230645 retry.go:31] will retry after 725.388807ms: waiting for domain to come up
	I0407 13:50:31.821291 1230577 main.go:141] libmachine: (bridge-056871) DBG | domain bridge-056871 has defined MAC address 52:54:00:d9:a5:38 in network mk-bridge-056871
	I0407 13:50:31.821911 1230577 main.go:141] libmachine: (bridge-056871) DBG | unable to find current IP address of domain bridge-056871 in network mk-bridge-056871
	I0407 13:50:31.821942 1230577 main.go:141] libmachine: (bridge-056871) DBG | I0407 13:50:31.821892 1230645 retry.go:31] will retry after 775.984409ms: waiting for domain to come up
	I0407 13:50:32.600220 1230577 main.go:141] libmachine: (bridge-056871) DBG | domain bridge-056871 has defined MAC address 52:54:00:d9:a5:38 in network mk-bridge-056871
	I0407 13:50:32.600842 1230577 main.go:141] libmachine: (bridge-056871) DBG | unable to find current IP address of domain bridge-056871 in network mk-bridge-056871
	I0407 13:50:32.600873 1230577 main.go:141] libmachine: (bridge-056871) DBG | I0407 13:50:32.600802 1230645 retry.go:31] will retry after 962.969903ms: waiting for domain to come up
	I0407 13:50:33.565304 1230577 main.go:141] libmachine: (bridge-056871) DBG | domain bridge-056871 has defined MAC address 52:54:00:d9:a5:38 in network mk-bridge-056871
	I0407 13:50:33.565921 1230577 main.go:141] libmachine: (bridge-056871) DBG | unable to find current IP address of domain bridge-056871 in network mk-bridge-056871
	I0407 13:50:33.565973 1230577 main.go:141] libmachine: (bridge-056871) DBG | I0407 13:50:33.565873 1230645 retry.go:31] will retry after 1.612514856s: waiting for domain to come up
	I0407 13:50:32.494439 1229086 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (2.677295945s)
	I0407 13:50:32.494478 1229086 crio.go:469] duration metric: took 2.677420142s to extract the tarball
	I0407 13:50:32.494489 1229086 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I0407 13:50:32.535794 1229086 ssh_runner.go:195] Run: sudo crictl images --output json
	I0407 13:50:32.583475 1229086 crio.go:514] all images are preloaded for cri-o runtime.
	I0407 13:50:32.583510 1229086 cache_images.go:84] Images are preloaded, skipping loading
	I0407 13:50:32.583522 1229086 kubeadm.go:934] updating node { 192.168.61.247 8443 v1.32.2 crio true true} ...
	I0407 13:50:32.583650 1229086 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.32.2/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=flannel-056871 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.61.247
	
	[Install]
	 config:
	{KubernetesVersion:v1.32.2 ClusterName:flannel-056871 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:flannel}
	I0407 13:50:32.583742 1229086 ssh_runner.go:195] Run: crio config
	I0407 13:50:32.640040 1229086 cni.go:84] Creating CNI manager for "flannel"
	I0407 13:50:32.640079 1229086 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0407 13:50:32.640116 1229086 kubeadm.go:189] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.61.247 APIServerPort:8443 KubernetesVersion:v1.32.2 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:flannel-056871 NodeName:flannel-056871 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.61.247"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.61.247 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0407 13:50:32.640292 1229086 kubeadm.go:195] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.61.247
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "flannel-056871"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.61.247"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.61.247"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      - name: "proxy-refresh-interval"
	        value: "70000"
	kubernetesVersion: v1.32.2
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0407 13:50:32.640374 1229086 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.32.2
	I0407 13:50:32.651316 1229086 binaries.go:44] Found k8s binaries, skipping transfer
	I0407 13:50:32.651398 1229086 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0407 13:50:32.662004 1229086 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (314 bytes)
	I0407 13:50:32.681383 1229086 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0407 13:50:32.699464 1229086 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2294 bytes)
	I0407 13:50:32.718073 1229086 ssh_runner.go:195] Run: grep 192.168.61.247	control-plane.minikube.internal$ /etc/hosts
	I0407 13:50:32.723813 1229086 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.61.247	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0407 13:50:32.742471 1229086 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0407 13:50:32.882766 1229086 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0407 13:50:32.902076 1229086 certs.go:68] Setting up /home/jenkins/minikube-integration/20602-1162386/.minikube/profiles/flannel-056871 for IP: 192.168.61.247
	I0407 13:50:32.902105 1229086 certs.go:194] generating shared ca certs ...
	I0407 13:50:32.902124 1229086 certs.go:226] acquiring lock for ca certs: {Name:mk8e89191fca7f2111bdd08c345368f593b0d5d0 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0407 13:50:32.902321 1229086 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/20602-1162386/.minikube/ca.key
	I0407 13:50:32.902375 1229086 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/20602-1162386/.minikube/proxy-client-ca.key
	I0407 13:50:32.902390 1229086 certs.go:256] generating profile certs ...
	I0407 13:50:32.902467 1229086 certs.go:363] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/20602-1162386/.minikube/profiles/flannel-056871/client.key
	I0407 13:50:32.902487 1229086 crypto.go:68] Generating cert /home/jenkins/minikube-integration/20602-1162386/.minikube/profiles/flannel-056871/client.crt with IP's: []
	I0407 13:50:33.569949 1229086 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/20602-1162386/.minikube/profiles/flannel-056871/client.crt ...
	I0407 13:50:33.569987 1229086 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20602-1162386/.minikube/profiles/flannel-056871/client.crt: {Name:mk4805221c36a2cca723bfa233dd774354e307a5 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0407 13:50:33.570209 1229086 crypto.go:164] Writing key to /home/jenkins/minikube-integration/20602-1162386/.minikube/profiles/flannel-056871/client.key ...
	I0407 13:50:33.570231 1229086 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20602-1162386/.minikube/profiles/flannel-056871/client.key: {Name:mk818a09a0e5f7f407383d67ffb02583991c1838 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0407 13:50:33.570358 1229086 certs.go:363] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/20602-1162386/.minikube/profiles/flannel-056871/apiserver.key.a08cb125
	I0407 13:50:33.570378 1229086 crypto.go:68] Generating cert /home/jenkins/minikube-integration/20602-1162386/.minikube/profiles/flannel-056871/apiserver.crt.a08cb125 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.61.247]
	I0407 13:50:33.921092 1229086 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/20602-1162386/.minikube/profiles/flannel-056871/apiserver.crt.a08cb125 ...
	I0407 13:50:33.921130 1229086 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20602-1162386/.minikube/profiles/flannel-056871/apiserver.crt.a08cb125: {Name:mk31775d71650842d3bfbf6897e603ba9bba8d7d Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0407 13:50:33.921346 1229086 crypto.go:164] Writing key to /home/jenkins/minikube-integration/20602-1162386/.minikube/profiles/flannel-056871/apiserver.key.a08cb125 ...
	I0407 13:50:33.921366 1229086 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20602-1162386/.minikube/profiles/flannel-056871/apiserver.key.a08cb125: {Name:mk0ccec011f5454d91ba41eaba4bfc3e7912f0e8 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0407 13:50:33.921468 1229086 certs.go:381] copying /home/jenkins/minikube-integration/20602-1162386/.minikube/profiles/flannel-056871/apiserver.crt.a08cb125 -> /home/jenkins/minikube-integration/20602-1162386/.minikube/profiles/flannel-056871/apiserver.crt
	I0407 13:50:33.921568 1229086 certs.go:385] copying /home/jenkins/minikube-integration/20602-1162386/.minikube/profiles/flannel-056871/apiserver.key.a08cb125 -> /home/jenkins/minikube-integration/20602-1162386/.minikube/profiles/flannel-056871/apiserver.key
	I0407 13:50:33.921658 1229086 certs.go:363] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/20602-1162386/.minikube/profiles/flannel-056871/proxy-client.key
	I0407 13:50:33.921682 1229086 crypto.go:68] Generating cert /home/jenkins/minikube-integration/20602-1162386/.minikube/profiles/flannel-056871/proxy-client.crt with IP's: []
	I0407 13:50:34.039930 1229086 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/20602-1162386/.minikube/profiles/flannel-056871/proxy-client.crt ...
	I0407 13:50:34.039969 1229086 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20602-1162386/.minikube/profiles/flannel-056871/proxy-client.crt: {Name:mk7847287bc2ac1a4785e1fb0e3cdcf907896c79 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0407 13:50:34.040196 1229086 crypto.go:164] Writing key to /home/jenkins/minikube-integration/20602-1162386/.minikube/profiles/flannel-056871/proxy-client.key ...
	I0407 13:50:34.040231 1229086 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20602-1162386/.minikube/profiles/flannel-056871/proxy-client.key: {Name:mk31e01ebd2604ee7baf20e81e06b392f9ab1ffa Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0407 13:50:34.040480 1229086 certs.go:484] found cert: /home/jenkins/minikube-integration/20602-1162386/.minikube/certs/1169716.pem (1338 bytes)
	W0407 13:50:34.040523 1229086 certs.go:480] ignoring /home/jenkins/minikube-integration/20602-1162386/.minikube/certs/1169716_empty.pem, impossibly tiny 0 bytes
	I0407 13:50:34.040530 1229086 certs.go:484] found cert: /home/jenkins/minikube-integration/20602-1162386/.minikube/certs/ca-key.pem (1675 bytes)
	I0407 13:50:34.040553 1229086 certs.go:484] found cert: /home/jenkins/minikube-integration/20602-1162386/.minikube/certs/ca.pem (1078 bytes)
	I0407 13:50:34.040576 1229086 certs.go:484] found cert: /home/jenkins/minikube-integration/20602-1162386/.minikube/certs/cert.pem (1123 bytes)
	I0407 13:50:34.040600 1229086 certs.go:484] found cert: /home/jenkins/minikube-integration/20602-1162386/.minikube/certs/key.pem (1675 bytes)
	I0407 13:50:34.040641 1229086 certs.go:484] found cert: /home/jenkins/minikube-integration/20602-1162386/.minikube/files/etc/ssl/certs/11697162.pem (1708 bytes)
	I0407 13:50:34.041300 1229086 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20602-1162386/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0407 13:50:34.086199 1229086 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20602-1162386/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0407 13:50:34.117611 1229086 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20602-1162386/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0407 13:50:34.144602 1229086 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20602-1162386/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0407 13:50:34.172664 1229086 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20602-1162386/.minikube/profiles/flannel-056871/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1419 bytes)
	I0407 13:50:34.201384 1229086 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20602-1162386/.minikube/profiles/flannel-056871/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0407 13:50:34.230768 1229086 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20602-1162386/.minikube/profiles/flannel-056871/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0407 13:50:34.257946 1229086 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20602-1162386/.minikube/profiles/flannel-056871/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0407 13:50:34.284148 1229086 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20602-1162386/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0407 13:50:34.309784 1229086 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20602-1162386/.minikube/certs/1169716.pem --> /usr/share/ca-certificates/1169716.pem (1338 bytes)
	I0407 13:50:34.337736 1229086 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20602-1162386/.minikube/files/etc/ssl/certs/11697162.pem --> /usr/share/ca-certificates/11697162.pem (1708 bytes)
	I0407 13:50:34.368444 1229086 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0407 13:50:34.386775 1229086 ssh_runner.go:195] Run: openssl version
	I0407 13:50:34.392660 1229086 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/11697162.pem && ln -fs /usr/share/ca-certificates/11697162.pem /etc/ssl/certs/11697162.pem"
	I0407 13:50:34.404605 1229086 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/11697162.pem
	I0407 13:50:34.409966 1229086 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Apr  7 12:22 /usr/share/ca-certificates/11697162.pem
	I0407 13:50:34.410059 1229086 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/11697162.pem
	I0407 13:50:34.417192 1229086 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/11697162.pem /etc/ssl/certs/3ec20f2e.0"
	I0407 13:50:34.434247 1229086 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0407 13:50:34.452498 1229086 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0407 13:50:34.461061 1229086 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Apr  7 12:14 /usr/share/ca-certificates/minikubeCA.pem
	I0407 13:50:34.461145 1229086 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0407 13:50:34.469236 1229086 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0407 13:50:34.486757 1229086 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/1169716.pem && ln -fs /usr/share/ca-certificates/1169716.pem /etc/ssl/certs/1169716.pem"
	I0407 13:50:34.505841 1229086 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/1169716.pem
	I0407 13:50:34.512500 1229086 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Apr  7 12:22 /usr/share/ca-certificates/1169716.pem
	I0407 13:50:34.512575 1229086 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/1169716.pem
	I0407 13:50:34.519355 1229086 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/1169716.pem /etc/ssl/certs/51391683.0"
	I0407 13:50:34.531934 1229086 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0407 13:50:34.537134 1229086 certs.go:399] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I0407 13:50:34.537220 1229086 kubeadm.go:392] StartCluster: {Name:flannel-056871 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.35.0-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.46-1743675393-20591@sha256:8de8167e280414c9d16e4c3da59bc85bc7c9cc24228af995863fc7bcabfcf727 Memory:3072 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.32.2 ClusterName:flannel-056871 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:flannel} Nodes:[{Name: IP:192.168.61.247 Port:8443 KubernetesVersion:v1.32.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0407 13:50:34.537316 1229086 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0407 13:50:34.537378 1229086 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0407 13:50:34.577926 1229086 cri.go:89] found id: ""
	I0407 13:50:34.578004 1229086 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0407 13:50:34.588664 1229086 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0407 13:50:34.601841 1229086 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0407 13:50:34.614425 1229086 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0407 13:50:34.614447 1229086 kubeadm.go:157] found existing configuration files:
	
	I0407 13:50:34.614509 1229086 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0407 13:50:34.626757 1229086 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0407 13:50:34.626835 1229086 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0407 13:50:34.636946 1229086 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0407 13:50:34.646731 1229086 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0407 13:50:34.646810 1229086 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0407 13:50:34.656940 1229086 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0407 13:50:34.666574 1229086 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0407 13:50:34.666657 1229086 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0407 13:50:34.676694 1229086 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0407 13:50:34.687739 1229086 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0407 13:50:34.687802 1229086 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0407 13:50:34.698275 1229086 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.32.2:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0407 13:50:34.871264 1229086 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0407 13:50:33.233049 1220973 pod_ready.go:103] pod "metrics-server-f79f97bbb-m78vh" in "kube-system" namespace has status "Ready":"False"
	I0407 13:50:35.233735 1220973 pod_ready.go:103] pod "metrics-server-f79f97bbb-m78vh" in "kube-system" namespace has status "Ready":"False"
	I0407 13:50:37.233866 1220973 pod_ready.go:103] pod "metrics-server-f79f97bbb-m78vh" in "kube-system" namespace has status "Ready":"False"
	I0407 13:50:35.180805 1230577 main.go:141] libmachine: (bridge-056871) DBG | domain bridge-056871 has defined MAC address 52:54:00:d9:a5:38 in network mk-bridge-056871
	I0407 13:50:35.181459 1230577 main.go:141] libmachine: (bridge-056871) DBG | unable to find current IP address of domain bridge-056871 in network mk-bridge-056871
	I0407 13:50:35.181495 1230577 main.go:141] libmachine: (bridge-056871) DBG | I0407 13:50:35.181447 1230645 retry.go:31] will retry after 1.757890507s: waiting for domain to come up
	I0407 13:50:36.941039 1230577 main.go:141] libmachine: (bridge-056871) DBG | domain bridge-056871 has defined MAC address 52:54:00:d9:a5:38 in network mk-bridge-056871
	I0407 13:50:36.942199 1230577 main.go:141] libmachine: (bridge-056871) DBG | unable to find current IP address of domain bridge-056871 in network mk-bridge-056871
	I0407 13:50:36.942252 1230577 main.go:141] libmachine: (bridge-056871) DBG | I0407 13:50:36.942113 1230645 retry.go:31] will retry after 2.027504729s: waiting for domain to come up
	I0407 13:50:38.970898 1230577 main.go:141] libmachine: (bridge-056871) DBG | domain bridge-056871 has defined MAC address 52:54:00:d9:a5:38 in network mk-bridge-056871
	I0407 13:50:38.971484 1230577 main.go:141] libmachine: (bridge-056871) DBG | unable to find current IP address of domain bridge-056871 in network mk-bridge-056871
	I0407 13:50:38.971524 1230577 main.go:141] libmachine: (bridge-056871) DBG | I0407 13:50:38.971472 1230645 retry.go:31] will retry after 2.641457601s: waiting for domain to come up
	I0407 13:50:39.734200 1220973 pod_ready.go:103] pod "metrics-server-f79f97bbb-m78vh" in "kube-system" namespace has status "Ready":"False"
	I0407 13:50:42.232999 1220973 pod_ready.go:103] pod "metrics-server-f79f97bbb-m78vh" in "kube-system" namespace has status "Ready":"False"
	I0407 13:50:41.614467 1230577 main.go:141] libmachine: (bridge-056871) DBG | domain bridge-056871 has defined MAC address 52:54:00:d9:a5:38 in network mk-bridge-056871
	I0407 13:50:41.615056 1230577 main.go:141] libmachine: (bridge-056871) DBG | unable to find current IP address of domain bridge-056871 in network mk-bridge-056871
	I0407 13:50:41.615081 1230577 main.go:141] libmachine: (bridge-056871) DBG | I0407 13:50:41.615017 1230645 retry.go:31] will retry after 2.736353363s: waiting for domain to come up
	I0407 13:50:45.532979 1229086 kubeadm.go:310] [init] Using Kubernetes version: v1.32.2
	I0407 13:50:45.533086 1229086 kubeadm.go:310] [preflight] Running pre-flight checks
	I0407 13:50:45.533212 1229086 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0407 13:50:45.533368 1229086 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0407 13:50:45.533481 1229086 kubeadm.go:310] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I0407 13:50:45.533565 1229086 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0407 13:50:45.535389 1229086 out.go:235]   - Generating certificates and keys ...
	I0407 13:50:45.535475 1229086 kubeadm.go:310] [certs] Using existing ca certificate authority
	I0407 13:50:45.535564 1229086 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I0407 13:50:45.535678 1229086 kubeadm.go:310] [certs] Generating "apiserver-kubelet-client" certificate and key
	I0407 13:50:45.535769 1229086 kubeadm.go:310] [certs] Generating "front-proxy-ca" certificate and key
	I0407 13:50:45.535860 1229086 kubeadm.go:310] [certs] Generating "front-proxy-client" certificate and key
	I0407 13:50:45.535934 1229086 kubeadm.go:310] [certs] Generating "etcd/ca" certificate and key
	I0407 13:50:45.536016 1229086 kubeadm.go:310] [certs] Generating "etcd/server" certificate and key
	I0407 13:50:45.536156 1229086 kubeadm.go:310] [certs] etcd/server serving cert is signed for DNS names [flannel-056871 localhost] and IPs [192.168.61.247 127.0.0.1 ::1]
	I0407 13:50:45.536233 1229086 kubeadm.go:310] [certs] Generating "etcd/peer" certificate and key
	I0407 13:50:45.536389 1229086 kubeadm.go:310] [certs] etcd/peer serving cert is signed for DNS names [flannel-056871 localhost] and IPs [192.168.61.247 127.0.0.1 ::1]
	I0407 13:50:45.536478 1229086 kubeadm.go:310] [certs] Generating "etcd/healthcheck-client" certificate and key
	I0407 13:50:45.536574 1229086 kubeadm.go:310] [certs] Generating "apiserver-etcd-client" certificate and key
	I0407 13:50:45.536646 1229086 kubeadm.go:310] [certs] Generating "sa" key and public key
	I0407 13:50:45.536731 1229086 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0407 13:50:45.536804 1229086 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0407 13:50:45.536887 1229086 kubeadm.go:310] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I0407 13:50:45.536948 1229086 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0407 13:50:45.537028 1229086 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0407 13:50:45.537076 1229086 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0407 13:50:45.537178 1229086 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0407 13:50:45.537281 1229086 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0407 13:50:45.539118 1229086 out.go:235]   - Booting up control plane ...
	I0407 13:50:45.539267 1229086 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0407 13:50:45.539349 1229086 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0407 13:50:45.539411 1229086 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0407 13:50:45.539507 1229086 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0407 13:50:45.539580 1229086 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0407 13:50:45.539635 1229086 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I0407 13:50:45.539818 1229086 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I0407 13:50:45.539949 1229086 kubeadm.go:310] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I0407 13:50:45.540025 1229086 kubeadm.go:310] [kubelet-check] The kubelet is healthy after 501.784059ms
	I0407 13:50:45.540087 1229086 kubeadm.go:310] [api-check] Waiting for a healthy API server. This can take up to 4m0s
	I0407 13:50:45.540140 1229086 kubeadm.go:310] [api-check] The API server is healthy after 5.502559723s
	I0407 13:50:45.540233 1229086 kubeadm.go:310] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0407 13:50:45.540348 1229086 kubeadm.go:310] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I0407 13:50:45.540399 1229086 kubeadm.go:310] [upload-certs] Skipping phase. Please see --upload-certs
	I0407 13:50:45.540552 1229086 kubeadm.go:310] [mark-control-plane] Marking the node flannel-056871 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I0407 13:50:45.540608 1229086 kubeadm.go:310] [bootstrap-token] Using token: t9k2ad.s7t8ejujlbhlgahm
	I0407 13:50:45.542178 1229086 out.go:235]   - Configuring RBAC rules ...
	I0407 13:50:45.542292 1229086 kubeadm.go:310] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0407 13:50:45.542396 1229086 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I0407 13:50:45.542597 1229086 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0407 13:50:45.542789 1229086 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I0407 13:50:45.542938 1229086 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0407 13:50:45.543010 1229086 kubeadm.go:310] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0407 13:50:45.543104 1229086 kubeadm.go:310] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I0407 13:50:45.543142 1229086 kubeadm.go:310] [addons] Applied essential addon: CoreDNS
	I0407 13:50:45.543177 1229086 kubeadm.go:310] [addons] Applied essential addon: kube-proxy
	I0407 13:50:45.543183 1229086 kubeadm.go:310] 
	I0407 13:50:45.543232 1229086 kubeadm.go:310] Your Kubernetes control-plane has initialized successfully!
	I0407 13:50:45.543238 1229086 kubeadm.go:310] 
	I0407 13:50:45.543318 1229086 kubeadm.go:310] To start using your cluster, you need to run the following as a regular user:
	I0407 13:50:45.543324 1229086 kubeadm.go:310] 
	I0407 13:50:45.543362 1229086 kubeadm.go:310]   mkdir -p $HOME/.kube
	I0407 13:50:45.543421 1229086 kubeadm.go:310]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0407 13:50:45.543465 1229086 kubeadm.go:310]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0407 13:50:45.543475 1229086 kubeadm.go:310] 
	I0407 13:50:45.543523 1229086 kubeadm.go:310] Alternatively, if you are the root user, you can run:
	I0407 13:50:45.543535 1229086 kubeadm.go:310] 
	I0407 13:50:45.543581 1229086 kubeadm.go:310]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I0407 13:50:45.543585 1229086 kubeadm.go:310] 
	I0407 13:50:45.543627 1229086 kubeadm.go:310] You should now deploy a pod network to the cluster.
	I0407 13:50:45.543696 1229086 kubeadm.go:310] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0407 13:50:45.543764 1229086 kubeadm.go:310]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0407 13:50:45.543770 1229086 kubeadm.go:310] 
	I0407 13:50:45.543839 1229086 kubeadm.go:310] You can now join any number of control-plane nodes by copying certificate authorities
	I0407 13:50:45.543900 1229086 kubeadm.go:310] and service account keys on each node and then running the following as root:
	I0407 13:50:45.543906 1229086 kubeadm.go:310] 
	I0407 13:50:45.543972 1229086 kubeadm.go:310]   kubeadm join control-plane.minikube.internal:8443 --token t9k2ad.s7t8ejujlbhlgahm \
	I0407 13:50:45.544074 1229086 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:001387253bb6e222db2af12e9fcbe5a1c3ee2a6f53970e58b5a7d017a3fc6618 \
	I0407 13:50:45.544103 1229086 kubeadm.go:310] 	--control-plane 
	I0407 13:50:45.544108 1229086 kubeadm.go:310] 
	I0407 13:50:45.544185 1229086 kubeadm.go:310] Then you can join any number of worker nodes by running the following on each as root:
	I0407 13:50:45.544193 1229086 kubeadm.go:310] 
	I0407 13:50:45.544266 1229086 kubeadm.go:310] kubeadm join control-plane.minikube.internal:8443 --token t9k2ad.s7t8ejujlbhlgahm \
	I0407 13:50:45.544372 1229086 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:001387253bb6e222db2af12e9fcbe5a1c3ee2a6f53970e58b5a7d017a3fc6618 
	I0407 13:50:45.544403 1229086 cni.go:84] Creating CNI manager for "flannel"
	I0407 13:50:45.546812 1229086 out.go:177] * Configuring Flannel (Container Networking Interface) ...
	I0407 13:50:45.548070 1229086 ssh_runner.go:195] Run: stat /opt/cni/bin/portmap
	I0407 13:50:45.554148 1229086 cni.go:182] applying CNI manifest using /var/lib/minikube/binaries/v1.32.2/kubectl ...
	I0407 13:50:45.554177 1229086 ssh_runner.go:362] scp memory --> /var/tmp/minikube/cni.yaml (4348 bytes)
	I0407 13:50:45.573799 1229086 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.32.2/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
	I0407 13:50:46.037107 1229086 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0407 13:50:46.037331 1229086 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.32.2/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0407 13:50:46.037333 1229086 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.32.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes flannel-056871 minikube.k8s.io/updated_at=2025_04_07T13_50_46_0700 minikube.k8s.io/version=v1.35.0 minikube.k8s.io/commit=33e6edc58d2014d70e908473920ef4ac8eae1e43 minikube.k8s.io/name=flannel-056871 minikube.k8s.io/primary=true
	I0407 13:50:46.064293 1229086 ops.go:34] apiserver oom_adj: -16
	I0407 13:50:44.733521 1220973 pod_ready.go:103] pod "metrics-server-f79f97bbb-m78vh" in "kube-system" namespace has status "Ready":"False"
	I0407 13:50:47.234709 1220973 pod_ready.go:103] pod "metrics-server-f79f97bbb-m78vh" in "kube-system" namespace has status "Ready":"False"
	I0407 13:50:44.353055 1230577 main.go:141] libmachine: (bridge-056871) DBG | domain bridge-056871 has defined MAC address 52:54:00:d9:a5:38 in network mk-bridge-056871
	I0407 13:50:44.353642 1230577 main.go:141] libmachine: (bridge-056871) DBG | unable to find current IP address of domain bridge-056871 in network mk-bridge-056871
	I0407 13:50:44.353675 1230577 main.go:141] libmachine: (bridge-056871) DBG | I0407 13:50:44.353588 1230645 retry.go:31] will retry after 5.250336716s: waiting for domain to come up
	I0407 13:50:46.201152 1229086 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.32.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0407 13:50:46.701353 1229086 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.32.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0407 13:50:47.201423 1229086 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.32.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0407 13:50:47.701873 1229086 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.32.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0407 13:50:48.201616 1229086 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.32.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0407 13:50:48.701363 1229086 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.32.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0407 13:50:49.202051 1229086 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.32.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0407 13:50:49.702005 1229086 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.32.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0407 13:50:49.851095 1229086 kubeadm.go:1113] duration metric: took 3.81385601s to wait for elevateKubeSystemPrivileges
	I0407 13:50:49.851152 1229086 kubeadm.go:394] duration metric: took 15.313943562s to StartCluster
	I0407 13:50:49.851181 1229086 settings.go:142] acquiring lock: {Name:mk19c4dc5d7992642f3fe5ca0bdb3ea65af01b65 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0407 13:50:49.851301 1229086 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/20602-1162386/kubeconfig
	I0407 13:50:49.852818 1229086 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20602-1162386/kubeconfig: {Name:mk712863958f7dbf2601dd82dc9ca7bea42ef42f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0407 13:50:49.853195 1229086 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.32.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I0407 13:50:49.853204 1229086 start.go:235] Will wait 15m0s for node &{Name: IP:192.168.61.247 Port:8443 KubernetesVersion:v1.32.2 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0407 13:50:49.853362 1229086 addons.go:511] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I0407 13:50:49.853468 1229086 addons.go:69] Setting storage-provisioner=true in profile "flannel-056871"
	I0407 13:50:49.853489 1229086 addons.go:238] Setting addon storage-provisioner=true in "flannel-056871"
	I0407 13:50:49.853508 1229086 addons.go:69] Setting default-storageclass=true in profile "flannel-056871"
	I0407 13:50:49.853533 1229086 host.go:66] Checking if "flannel-056871" exists ...
	I0407 13:50:49.853564 1229086 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "flannel-056871"
	I0407 13:50:49.853590 1229086 config.go:182] Loaded profile config "flannel-056871": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.32.2
	I0407 13:50:49.854106 1229086 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0407 13:50:49.854150 1229086 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0407 13:50:49.854146 1229086 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0407 13:50:49.854285 1229086 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0407 13:50:49.854855 1229086 out.go:177] * Verifying Kubernetes components...
	I0407 13:50:49.856724 1229086 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0407 13:50:49.875017 1229086 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39479
	I0407 13:50:49.875166 1229086 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42643
	I0407 13:50:49.875729 1229086 main.go:141] libmachine: () Calling .GetVersion
	I0407 13:50:49.875743 1229086 main.go:141] libmachine: () Calling .GetVersion
	I0407 13:50:49.876373 1229086 main.go:141] libmachine: Using API Version  1
	I0407 13:50:49.876375 1229086 main.go:141] libmachine: Using API Version  1
	I0407 13:50:49.876409 1229086 main.go:141] libmachine: () Calling .SetConfigRaw
	I0407 13:50:49.876425 1229086 main.go:141] libmachine: () Calling .SetConfigRaw
	I0407 13:50:49.876939 1229086 main.go:141] libmachine: () Calling .GetMachineName
	I0407 13:50:49.877001 1229086 main.go:141] libmachine: () Calling .GetMachineName
	I0407 13:50:49.877237 1229086 main.go:141] libmachine: (flannel-056871) Calling .GetState
	I0407 13:50:49.877651 1229086 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0407 13:50:49.877698 1229086 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0407 13:50:49.882004 1229086 addons.go:238] Setting addon default-storageclass=true in "flannel-056871"
	I0407 13:50:49.882055 1229086 host.go:66] Checking if "flannel-056871" exists ...
	I0407 13:50:49.882496 1229086 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0407 13:50:49.882542 1229086 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0407 13:50:49.905456 1229086 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34887
	I0407 13:50:49.906078 1229086 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35353
	I0407 13:50:49.906328 1229086 main.go:141] libmachine: () Calling .GetVersion
	I0407 13:50:49.906588 1229086 main.go:141] libmachine: () Calling .GetVersion
	I0407 13:50:49.906958 1229086 main.go:141] libmachine: Using API Version  1
	I0407 13:50:49.906989 1229086 main.go:141] libmachine: () Calling .SetConfigRaw
	I0407 13:50:49.907436 1229086 main.go:141] libmachine: Using API Version  1
	I0407 13:50:49.907453 1229086 main.go:141] libmachine: () Calling .SetConfigRaw
	I0407 13:50:49.907514 1229086 main.go:141] libmachine: () Calling .GetMachineName
	I0407 13:50:49.907756 1229086 main.go:141] libmachine: (flannel-056871) Calling .GetState
	I0407 13:50:49.907844 1229086 main.go:141] libmachine: () Calling .GetMachineName
	I0407 13:50:49.908507 1229086 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0407 13:50:49.908572 1229086 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0407 13:50:49.909997 1229086 main.go:141] libmachine: (flannel-056871) Calling .DriverName
	I0407 13:50:49.912348 1229086 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0407 13:50:49.915139 1229086 addons.go:435] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0407 13:50:49.915178 1229086 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0407 13:50:49.915221 1229086 main.go:141] libmachine: (flannel-056871) Calling .GetSSHHostname
	I0407 13:50:49.920532 1229086 main.go:141] libmachine: (flannel-056871) DBG | domain flannel-056871 has defined MAC address 52:54:00:b2:bb:50 in network mk-flannel-056871
	I0407 13:50:49.920862 1229086 main.go:141] libmachine: (flannel-056871) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b2:bb:50", ip: ""} in network mk-flannel-056871: {Iface:virbr3 ExpiryTime:2025-04-07 14:50:16 +0000 UTC Type:0 Mac:52:54:00:b2:bb:50 Iaid: IPaddr:192.168.61.247 Prefix:24 Hostname:flannel-056871 Clientid:01:52:54:00:b2:bb:50}
	I0407 13:50:49.920884 1229086 main.go:141] libmachine: (flannel-056871) DBG | domain flannel-056871 has defined IP address 192.168.61.247 and MAC address 52:54:00:b2:bb:50 in network mk-flannel-056871
	I0407 13:50:49.921348 1229086 main.go:141] libmachine: (flannel-056871) Calling .GetSSHPort
	I0407 13:50:49.921619 1229086 main.go:141] libmachine: (flannel-056871) Calling .GetSSHKeyPath
	I0407 13:50:49.921850 1229086 main.go:141] libmachine: (flannel-056871) Calling .GetSSHUsername
	I0407 13:50:49.922135 1229086 sshutil.go:53] new ssh client: &{IP:192.168.61.247 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20602-1162386/.minikube/machines/flannel-056871/id_rsa Username:docker}
	I0407 13:50:49.928730 1229086 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42571
	I0407 13:50:49.929289 1229086 main.go:141] libmachine: () Calling .GetVersion
	I0407 13:50:49.929795 1229086 main.go:141] libmachine: Using API Version  1
	I0407 13:50:49.929826 1229086 main.go:141] libmachine: () Calling .SetConfigRaw
	I0407 13:50:49.930205 1229086 main.go:141] libmachine: () Calling .GetMachineName
	I0407 13:50:49.930435 1229086 main.go:141] libmachine: (flannel-056871) Calling .GetState
	I0407 13:50:49.932734 1229086 main.go:141] libmachine: (flannel-056871) Calling .DriverName
	I0407 13:50:49.933052 1229086 addons.go:435] installing /etc/kubernetes/addons/storageclass.yaml
	I0407 13:50:49.933074 1229086 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0407 13:50:49.933099 1229086 main.go:141] libmachine: (flannel-056871) Calling .GetSSHHostname
	I0407 13:50:49.936791 1229086 main.go:141] libmachine: (flannel-056871) DBG | domain flannel-056871 has defined MAC address 52:54:00:b2:bb:50 in network mk-flannel-056871
	I0407 13:50:49.937295 1229086 main.go:141] libmachine: (flannel-056871) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b2:bb:50", ip: ""} in network mk-flannel-056871: {Iface:virbr3 ExpiryTime:2025-04-07 14:50:16 +0000 UTC Type:0 Mac:52:54:00:b2:bb:50 Iaid: IPaddr:192.168.61.247 Prefix:24 Hostname:flannel-056871 Clientid:01:52:54:00:b2:bb:50}
	I0407 13:50:49.937324 1229086 main.go:141] libmachine: (flannel-056871) DBG | domain flannel-056871 has defined IP address 192.168.61.247 and MAC address 52:54:00:b2:bb:50 in network mk-flannel-056871
	I0407 13:50:49.937512 1229086 main.go:141] libmachine: (flannel-056871) Calling .GetSSHPort
	I0407 13:50:49.937814 1229086 main.go:141] libmachine: (flannel-056871) Calling .GetSSHKeyPath
	I0407 13:50:49.938069 1229086 main.go:141] libmachine: (flannel-056871) Calling .GetSSHUsername
	I0407 13:50:49.938248 1229086 sshutil.go:53] new ssh client: &{IP:192.168.61.247 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20602-1162386/.minikube/machines/flannel-056871/id_rsa Username:docker}
	I0407 13:50:50.214467 1229086 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0407 13:50:50.214570 1229086 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.32.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.61.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.32.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I0407 13:50:50.253276 1229086 node_ready.go:35] waiting up to 15m0s for node "flannel-056871" to be "Ready" ...
	I0407 13:50:50.335699 1229086 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.2/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0407 13:50:50.467947 1229086 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.2/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0407 13:50:50.731547 1229086 start.go:971] {"host.minikube.internal": 192.168.61.1} host record injected into CoreDNS's ConfigMap
	I0407 13:50:50.733104 1229086 main.go:141] libmachine: Making call to close driver server
	I0407 13:50:50.733137 1229086 main.go:141] libmachine: (flannel-056871) Calling .Close
	I0407 13:50:50.733577 1229086 main.go:141] libmachine: (flannel-056871) DBG | Closing plugin on server side
	I0407 13:50:50.733629 1229086 main.go:141] libmachine: Successfully made call to close driver server
	I0407 13:50:50.733642 1229086 main.go:141] libmachine: Making call to close connection to plugin binary
	I0407 13:50:50.733652 1229086 main.go:141] libmachine: Making call to close driver server
	I0407 13:50:50.733664 1229086 main.go:141] libmachine: (flannel-056871) Calling .Close
	I0407 13:50:50.733988 1229086 main.go:141] libmachine: Successfully made call to close driver server
	I0407 13:50:50.734009 1229086 main.go:141] libmachine: Making call to close connection to plugin binary
	I0407 13:50:50.734028 1229086 main.go:141] libmachine: (flannel-056871) DBG | Closing plugin on server side
	I0407 13:50:50.741077 1229086 main.go:141] libmachine: Making call to close driver server
	I0407 13:50:50.741122 1229086 main.go:141] libmachine: (flannel-056871) Calling .Close
	I0407 13:50:50.741511 1229086 main.go:141] libmachine: Successfully made call to close driver server
	I0407 13:50:50.741530 1229086 main.go:141] libmachine: Making call to close connection to plugin binary
	I0407 13:50:50.741565 1229086 main.go:141] libmachine: (flannel-056871) DBG | Closing plugin on server side
	I0407 13:50:51.185842 1229086 main.go:141] libmachine: Making call to close driver server
	I0407 13:50:51.185879 1229086 main.go:141] libmachine: (flannel-056871) Calling .Close
	I0407 13:50:51.186216 1229086 main.go:141] libmachine: Successfully made call to close driver server
	I0407 13:50:51.186239 1229086 main.go:141] libmachine: Making call to close connection to plugin binary
	I0407 13:50:51.186248 1229086 main.go:141] libmachine: Making call to close driver server
	I0407 13:50:51.186255 1229086 main.go:141] libmachine: (flannel-056871) Calling .Close
	I0407 13:50:51.186632 1229086 main.go:141] libmachine: (flannel-056871) DBG | Closing plugin on server side
	I0407 13:50:51.186634 1229086 main.go:141] libmachine: Successfully made call to close driver server
	I0407 13:50:51.186660 1229086 main.go:141] libmachine: Making call to close connection to plugin binary
	I0407 13:50:51.188142 1229086 out.go:177] * Enabled addons: default-storageclass, storage-provisioner
	I0407 13:50:49.733551 1220973 pod_ready.go:103] pod "metrics-server-f79f97bbb-m78vh" in "kube-system" namespace has status "Ready":"False"
	I0407 13:50:52.231923 1220973 pod_ready.go:103] pod "metrics-server-f79f97bbb-m78vh" in "kube-system" namespace has status "Ready":"False"
	I0407 13:50:49.607206 1230577 main.go:141] libmachine: (bridge-056871) DBG | domain bridge-056871 has defined MAC address 52:54:00:d9:a5:38 in network mk-bridge-056871
	I0407 13:50:49.608068 1230577 main.go:141] libmachine: (bridge-056871) DBG | domain bridge-056871 has current primary IP address 192.168.50.60 and MAC address 52:54:00:d9:a5:38 in network mk-bridge-056871
	I0407 13:50:49.608100 1230577 main.go:141] libmachine: (bridge-056871) found domain IP: 192.168.50.60
	I0407 13:50:49.608113 1230577 main.go:141] libmachine: (bridge-056871) reserving static IP address...
	I0407 13:50:49.608627 1230577 main.go:141] libmachine: (bridge-056871) DBG | unable to find host DHCP lease matching {name: "bridge-056871", mac: "52:54:00:d9:a5:38", ip: "192.168.50.60"} in network mk-bridge-056871
	I0407 13:50:49.733945 1230577 main.go:141] libmachine: (bridge-056871) DBG | Getting to WaitForSSH function...
	I0407 13:50:49.733980 1230577 main.go:141] libmachine: (bridge-056871) reserved static IP address 192.168.50.60 for domain bridge-056871
	I0407 13:50:49.734003 1230577 main.go:141] libmachine: (bridge-056871) waiting for SSH...
	I0407 13:50:49.737179 1230577 main.go:141] libmachine: (bridge-056871) DBG | domain bridge-056871 has defined MAC address 52:54:00:d9:a5:38 in network mk-bridge-056871
	I0407 13:50:49.737672 1230577 main.go:141] libmachine: (bridge-056871) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d9:a5:38", ip: ""} in network mk-bridge-056871: {Iface:virbr2 ExpiryTime:2025-04-07 14:50:42 +0000 UTC Type:0 Mac:52:54:00:d9:a5:38 Iaid: IPaddr:192.168.50.60 Prefix:24 Hostname:minikube Clientid:01:52:54:00:d9:a5:38}
	I0407 13:50:49.737721 1230577 main.go:141] libmachine: (bridge-056871) DBG | domain bridge-056871 has defined IP address 192.168.50.60 and MAC address 52:54:00:d9:a5:38 in network mk-bridge-056871
	I0407 13:50:49.737907 1230577 main.go:141] libmachine: (bridge-056871) DBG | Using SSH client type: external
	I0407 13:50:49.737938 1230577 main.go:141] libmachine: (bridge-056871) DBG | Using SSH private key: /home/jenkins/minikube-integration/20602-1162386/.minikube/machines/bridge-056871/id_rsa (-rw-------)
	I0407 13:50:49.737990 1230577 main.go:141] libmachine: (bridge-056871) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.50.60 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/20602-1162386/.minikube/machines/bridge-056871/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0407 13:50:49.738003 1230577 main.go:141] libmachine: (bridge-056871) DBG | About to run SSH command:
	I0407 13:50:49.738018 1230577 main.go:141] libmachine: (bridge-056871) DBG | exit 0
	I0407 13:50:49.871623 1230577 main.go:141] libmachine: (bridge-056871) DBG | SSH cmd err, output: <nil>: 
	I0407 13:50:49.872173 1230577 main.go:141] libmachine: (bridge-056871) KVM machine creation complete
	I0407 13:50:49.872561 1230577 main.go:141] libmachine: (bridge-056871) Calling .GetConfigRaw
	I0407 13:50:49.873430 1230577 main.go:141] libmachine: (bridge-056871) Calling .DriverName
	I0407 13:50:49.873992 1230577 main.go:141] libmachine: (bridge-056871) Calling .DriverName
	I0407 13:50:49.874580 1230577 main.go:141] libmachine: Waiting for machine to be running, this may take a few minutes...
	I0407 13:50:49.874607 1230577 main.go:141] libmachine: (bridge-056871) Calling .GetState
	I0407 13:50:49.876968 1230577 main.go:141] libmachine: Detecting operating system of created instance...
	I0407 13:50:49.876990 1230577 main.go:141] libmachine: Waiting for SSH to be available...
	I0407 13:50:49.876998 1230577 main.go:141] libmachine: Getting to WaitForSSH function...
	I0407 13:50:49.877008 1230577 main.go:141] libmachine: (bridge-056871) Calling .GetSSHHostname
	I0407 13:50:49.881356 1230577 main.go:141] libmachine: (bridge-056871) DBG | domain bridge-056871 has defined MAC address 52:54:00:d9:a5:38 in network mk-bridge-056871
	I0407 13:50:49.881976 1230577 main.go:141] libmachine: (bridge-056871) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d9:a5:38", ip: ""} in network mk-bridge-056871: {Iface:virbr2 ExpiryTime:2025-04-07 14:50:42 +0000 UTC Type:0 Mac:52:54:00:d9:a5:38 Iaid: IPaddr:192.168.50.60 Prefix:24 Hostname:bridge-056871 Clientid:01:52:54:00:d9:a5:38}
	I0407 13:50:49.882011 1230577 main.go:141] libmachine: (bridge-056871) DBG | domain bridge-056871 has defined IP address 192.168.50.60 and MAC address 52:54:00:d9:a5:38 in network mk-bridge-056871
	I0407 13:50:49.882276 1230577 main.go:141] libmachine: (bridge-056871) Calling .GetSSHPort
	I0407 13:50:49.882470 1230577 main.go:141] libmachine: (bridge-056871) Calling .GetSSHKeyPath
	I0407 13:50:49.882640 1230577 main.go:141] libmachine: (bridge-056871) Calling .GetSSHKeyPath
	I0407 13:50:49.882737 1230577 main.go:141] libmachine: (bridge-056871) Calling .GetSSHUsername
	I0407 13:50:49.882895 1230577 main.go:141] libmachine: Using SSH client type: native
	I0407 13:50:49.883159 1230577 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x836360] 0x839060 <nil>  [] 0s} 192.168.50.60 22 <nil> <nil>}
	I0407 13:50:49.883175 1230577 main.go:141] libmachine: About to run SSH command:
	exit 0
	I0407 13:50:50.013571 1230577 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0407 13:50:50.013597 1230577 main.go:141] libmachine: Detecting the provisioner...
	I0407 13:50:50.013606 1230577 main.go:141] libmachine: (bridge-056871) Calling .GetSSHHostname
	I0407 13:50:50.017589 1230577 main.go:141] libmachine: (bridge-056871) DBG | domain bridge-056871 has defined MAC address 52:54:00:d9:a5:38 in network mk-bridge-056871
	I0407 13:50:50.018237 1230577 main.go:141] libmachine: (bridge-056871) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d9:a5:38", ip: ""} in network mk-bridge-056871: {Iface:virbr2 ExpiryTime:2025-04-07 14:50:42 +0000 UTC Type:0 Mac:52:54:00:d9:a5:38 Iaid: IPaddr:192.168.50.60 Prefix:24 Hostname:bridge-056871 Clientid:01:52:54:00:d9:a5:38}
	I0407 13:50:50.018288 1230577 main.go:141] libmachine: (bridge-056871) DBG | domain bridge-056871 has defined IP address 192.168.50.60 and MAC address 52:54:00:d9:a5:38 in network mk-bridge-056871
	I0407 13:50:50.018490 1230577 main.go:141] libmachine: (bridge-056871) Calling .GetSSHPort
	I0407 13:50:50.018885 1230577 main.go:141] libmachine: (bridge-056871) Calling .GetSSHKeyPath
	I0407 13:50:50.019181 1230577 main.go:141] libmachine: (bridge-056871) Calling .GetSSHKeyPath
	I0407 13:50:50.019392 1230577 main.go:141] libmachine: (bridge-056871) Calling .GetSSHUsername
	I0407 13:50:50.019621 1230577 main.go:141] libmachine: Using SSH client type: native
	I0407 13:50:50.020026 1230577 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x836360] 0x839060 <nil>  [] 0s} 192.168.50.60 22 <nil> <nil>}
	I0407 13:50:50.020047 1230577 main.go:141] libmachine: About to run SSH command:
	cat /etc/os-release
	I0407 13:50:50.148346 1230577 main.go:141] libmachine: SSH cmd err, output: <nil>: NAME=Buildroot
	VERSION=2023.02.9-dirty
	ID=buildroot
	VERSION_ID=2023.02.9
	PRETTY_NAME="Buildroot 2023.02.9"
	
	I0407 13:50:50.148523 1230577 main.go:141] libmachine: found compatible host: buildroot
	I0407 13:50:50.148541 1230577 main.go:141] libmachine: Provisioning with buildroot...
	I0407 13:50:50.148552 1230577 main.go:141] libmachine: (bridge-056871) Calling .GetMachineName
	I0407 13:50:50.148891 1230577 buildroot.go:166] provisioning hostname "bridge-056871"
	I0407 13:50:50.148924 1230577 main.go:141] libmachine: (bridge-056871) Calling .GetMachineName
	I0407 13:50:50.149180 1230577 main.go:141] libmachine: (bridge-056871) Calling .GetSSHHostname
	I0407 13:50:50.153622 1230577 main.go:141] libmachine: (bridge-056871) DBG | domain bridge-056871 has defined MAC address 52:54:00:d9:a5:38 in network mk-bridge-056871
	I0407 13:50:50.154168 1230577 main.go:141] libmachine: (bridge-056871) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d9:a5:38", ip: ""} in network mk-bridge-056871: {Iface:virbr2 ExpiryTime:2025-04-07 14:50:42 +0000 UTC Type:0 Mac:52:54:00:d9:a5:38 Iaid: IPaddr:192.168.50.60 Prefix:24 Hostname:bridge-056871 Clientid:01:52:54:00:d9:a5:38}
	I0407 13:50:50.154202 1230577 main.go:141] libmachine: (bridge-056871) DBG | domain bridge-056871 has defined IP address 192.168.50.60 and MAC address 52:54:00:d9:a5:38 in network mk-bridge-056871
	I0407 13:50:50.154423 1230577 main.go:141] libmachine: (bridge-056871) Calling .GetSSHPort
	I0407 13:50:50.154840 1230577 main.go:141] libmachine: (bridge-056871) Calling .GetSSHKeyPath
	I0407 13:50:50.155099 1230577 main.go:141] libmachine: (bridge-056871) Calling .GetSSHKeyPath
	I0407 13:50:50.155343 1230577 main.go:141] libmachine: (bridge-056871) Calling .GetSSHUsername
	I0407 13:50:50.155598 1230577 main.go:141] libmachine: Using SSH client type: native
	I0407 13:50:50.155917 1230577 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x836360] 0x839060 <nil>  [] 0s} 192.168.50.60 22 <nil> <nil>}
	I0407 13:50:50.155939 1230577 main.go:141] libmachine: About to run SSH command:
	sudo hostname bridge-056871 && echo "bridge-056871" | sudo tee /etc/hostname
	I0407 13:50:50.307962 1230577 main.go:141] libmachine: SSH cmd err, output: <nil>: bridge-056871
	
	I0407 13:50:50.308108 1230577 main.go:141] libmachine: (bridge-056871) Calling .GetSSHHostname
	I0407 13:50:50.312570 1230577 main.go:141] libmachine: (bridge-056871) DBG | domain bridge-056871 has defined MAC address 52:54:00:d9:a5:38 in network mk-bridge-056871
	I0407 13:50:50.313202 1230577 main.go:141] libmachine: (bridge-056871) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d9:a5:38", ip: ""} in network mk-bridge-056871: {Iface:virbr2 ExpiryTime:2025-04-07 14:50:42 +0000 UTC Type:0 Mac:52:54:00:d9:a5:38 Iaid: IPaddr:192.168.50.60 Prefix:24 Hostname:bridge-056871 Clientid:01:52:54:00:d9:a5:38}
	I0407 13:50:50.313255 1230577 main.go:141] libmachine: (bridge-056871) DBG | domain bridge-056871 has defined IP address 192.168.50.60 and MAC address 52:54:00:d9:a5:38 in network mk-bridge-056871
	I0407 13:50:50.313769 1230577 main.go:141] libmachine: (bridge-056871) Calling .GetSSHPort
	I0407 13:50:50.314025 1230577 main.go:141] libmachine: (bridge-056871) Calling .GetSSHKeyPath
	I0407 13:50:50.314284 1230577 main.go:141] libmachine: (bridge-056871) Calling .GetSSHKeyPath
	I0407 13:50:50.314527 1230577 main.go:141] libmachine: (bridge-056871) Calling .GetSSHUsername
	I0407 13:50:50.314847 1230577 main.go:141] libmachine: Using SSH client type: native
	I0407 13:50:50.315206 1230577 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x836360] 0x839060 <nil>  [] 0s} 192.168.50.60 22 <nil> <nil>}
	I0407 13:50:50.315237 1230577 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sbridge-056871' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 bridge-056871/g' /etc/hosts;
				else 
					echo '127.0.1.1 bridge-056871' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0407 13:50:50.449379 1230577 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0407 13:50:50.449450 1230577 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/20602-1162386/.minikube CaCertPath:/home/jenkins/minikube-integration/20602-1162386/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/20602-1162386/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/20602-1162386/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/20602-1162386/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/20602-1162386/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/20602-1162386/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/20602-1162386/.minikube}
	I0407 13:50:50.449489 1230577 buildroot.go:174] setting up certificates
	I0407 13:50:50.449508 1230577 provision.go:84] configureAuth start
	I0407 13:50:50.449524 1230577 main.go:141] libmachine: (bridge-056871) Calling .GetMachineName
	I0407 13:50:50.450144 1230577 main.go:141] libmachine: (bridge-056871) Calling .GetIP
	I0407 13:50:50.455450 1230577 main.go:141] libmachine: (bridge-056871) DBG | domain bridge-056871 has defined MAC address 52:54:00:d9:a5:38 in network mk-bridge-056871
	I0407 13:50:50.456299 1230577 main.go:141] libmachine: (bridge-056871) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d9:a5:38", ip: ""} in network mk-bridge-056871: {Iface:virbr2 ExpiryTime:2025-04-07 14:50:42 +0000 UTC Type:0 Mac:52:54:00:d9:a5:38 Iaid: IPaddr:192.168.50.60 Prefix:24 Hostname:bridge-056871 Clientid:01:52:54:00:d9:a5:38}
	I0407 13:50:50.456349 1230577 main.go:141] libmachine: (bridge-056871) DBG | domain bridge-056871 has defined IP address 192.168.50.60 and MAC address 52:54:00:d9:a5:38 in network mk-bridge-056871
	I0407 13:50:50.456694 1230577 main.go:141] libmachine: (bridge-056871) Calling .GetSSHHostname
	I0407 13:50:50.460806 1230577 main.go:141] libmachine: (bridge-056871) DBG | domain bridge-056871 has defined MAC address 52:54:00:d9:a5:38 in network mk-bridge-056871
	I0407 13:50:50.461439 1230577 main.go:141] libmachine: (bridge-056871) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d9:a5:38", ip: ""} in network mk-bridge-056871: {Iface:virbr2 ExpiryTime:2025-04-07 14:50:42 +0000 UTC Type:0 Mac:52:54:00:d9:a5:38 Iaid: IPaddr:192.168.50.60 Prefix:24 Hostname:bridge-056871 Clientid:01:52:54:00:d9:a5:38}
	I0407 13:50:50.461465 1230577 main.go:141] libmachine: (bridge-056871) DBG | domain bridge-056871 has defined IP address 192.168.50.60 and MAC address 52:54:00:d9:a5:38 in network mk-bridge-056871
	I0407 13:50:50.461845 1230577 provision.go:143] copyHostCerts
	I0407 13:50:50.461922 1230577 exec_runner.go:144] found /home/jenkins/minikube-integration/20602-1162386/.minikube/cert.pem, removing ...
	I0407 13:50:50.461946 1230577 exec_runner.go:203] rm: /home/jenkins/minikube-integration/20602-1162386/.minikube/cert.pem
	I0407 13:50:50.462008 1230577 exec_runner.go:151] cp: /home/jenkins/minikube-integration/20602-1162386/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/20602-1162386/.minikube/cert.pem (1123 bytes)
	I0407 13:50:50.462133 1230577 exec_runner.go:144] found /home/jenkins/minikube-integration/20602-1162386/.minikube/key.pem, removing ...
	I0407 13:50:50.462146 1230577 exec_runner.go:203] rm: /home/jenkins/minikube-integration/20602-1162386/.minikube/key.pem
	I0407 13:50:50.462169 1230577 exec_runner.go:151] cp: /home/jenkins/minikube-integration/20602-1162386/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/20602-1162386/.minikube/key.pem (1675 bytes)
	I0407 13:50:50.462266 1230577 exec_runner.go:144] found /home/jenkins/minikube-integration/20602-1162386/.minikube/ca.pem, removing ...
	I0407 13:50:50.462279 1230577 exec_runner.go:203] rm: /home/jenkins/minikube-integration/20602-1162386/.minikube/ca.pem
	I0407 13:50:50.462310 1230577 exec_runner.go:151] cp: /home/jenkins/minikube-integration/20602-1162386/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/20602-1162386/.minikube/ca.pem (1078 bytes)
	I0407 13:50:50.462399 1230577 provision.go:117] generating server cert: /home/jenkins/minikube-integration/20602-1162386/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/20602-1162386/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/20602-1162386/.minikube/certs/ca-key.pem org=jenkins.bridge-056871 san=[127.0.0.1 192.168.50.60 bridge-056871 localhost minikube]
	I0407 13:50:50.593520 1230577 provision.go:177] copyRemoteCerts
	I0407 13:50:50.593592 1230577 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0407 13:50:50.593620 1230577 main.go:141] libmachine: (bridge-056871) Calling .GetSSHHostname
	I0407 13:50:50.597459 1230577 main.go:141] libmachine: (bridge-056871) DBG | domain bridge-056871 has defined MAC address 52:54:00:d9:a5:38 in network mk-bridge-056871
	I0407 13:50:50.598006 1230577 main.go:141] libmachine: (bridge-056871) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d9:a5:38", ip: ""} in network mk-bridge-056871: {Iface:virbr2 ExpiryTime:2025-04-07 14:50:42 +0000 UTC Type:0 Mac:52:54:00:d9:a5:38 Iaid: IPaddr:192.168.50.60 Prefix:24 Hostname:bridge-056871 Clientid:01:52:54:00:d9:a5:38}
	I0407 13:50:50.598046 1230577 main.go:141] libmachine: (bridge-056871) DBG | domain bridge-056871 has defined IP address 192.168.50.60 and MAC address 52:54:00:d9:a5:38 in network mk-bridge-056871
	I0407 13:50:50.598295 1230577 main.go:141] libmachine: (bridge-056871) Calling .GetSSHPort
	I0407 13:50:50.598543 1230577 main.go:141] libmachine: (bridge-056871) Calling .GetSSHKeyPath
	I0407 13:50:50.598774 1230577 main.go:141] libmachine: (bridge-056871) Calling .GetSSHUsername
	I0407 13:50:50.598944 1230577 sshutil.go:53] new ssh client: &{IP:192.168.50.60 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20602-1162386/.minikube/machines/bridge-056871/id_rsa Username:docker}
	I0407 13:50:50.689418 1230577 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20602-1162386/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0407 13:50:50.722043 1230577 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20602-1162386/.minikube/machines/server.pem --> /etc/docker/server.pem (1208 bytes)
	I0407 13:50:50.757374 1230577 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20602-1162386/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0407 13:50:50.789497 1230577 provision.go:87] duration metric: took 339.97034ms to configureAuth
	I0407 13:50:50.789540 1230577 buildroot.go:189] setting minikube options for container-runtime
	I0407 13:50:50.789789 1230577 config.go:182] Loaded profile config "bridge-056871": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.32.2
	I0407 13:50:50.789890 1230577 main.go:141] libmachine: (bridge-056871) Calling .GetSSHHostname
	I0407 13:50:50.793663 1230577 main.go:141] libmachine: (bridge-056871) DBG | domain bridge-056871 has defined MAC address 52:54:00:d9:a5:38 in network mk-bridge-056871
	I0407 13:50:50.794168 1230577 main.go:141] libmachine: (bridge-056871) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d9:a5:38", ip: ""} in network mk-bridge-056871: {Iface:virbr2 ExpiryTime:2025-04-07 14:50:42 +0000 UTC Type:0 Mac:52:54:00:d9:a5:38 Iaid: IPaddr:192.168.50.60 Prefix:24 Hostname:bridge-056871 Clientid:01:52:54:00:d9:a5:38}
	I0407 13:50:50.794207 1230577 main.go:141] libmachine: (bridge-056871) DBG | domain bridge-056871 has defined IP address 192.168.50.60 and MAC address 52:54:00:d9:a5:38 in network mk-bridge-056871
	I0407 13:50:50.794531 1230577 main.go:141] libmachine: (bridge-056871) Calling .GetSSHPort
	I0407 13:50:50.794759 1230577 main.go:141] libmachine: (bridge-056871) Calling .GetSSHKeyPath
	I0407 13:50:50.794949 1230577 main.go:141] libmachine: (bridge-056871) Calling .GetSSHKeyPath
	I0407 13:50:50.795114 1230577 main.go:141] libmachine: (bridge-056871) Calling .GetSSHUsername
	I0407 13:50:50.795319 1230577 main.go:141] libmachine: Using SSH client type: native
	I0407 13:50:50.795557 1230577 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x836360] 0x839060 <nil>  [] 0s} 192.168.50.60 22 <nil> <nil>}
	I0407 13:50:50.795574 1230577 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0407 13:50:51.058392 1230577 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0407 13:50:51.058431 1230577 main.go:141] libmachine: Checking connection to Docker...
	I0407 13:50:51.058443 1230577 main.go:141] libmachine: (bridge-056871) Calling .GetURL
	I0407 13:50:51.060057 1230577 main.go:141] libmachine: (bridge-056871) DBG | using libvirt version 6000000
	I0407 13:50:51.063055 1230577 main.go:141] libmachine: (bridge-056871) DBG | domain bridge-056871 has defined MAC address 52:54:00:d9:a5:38 in network mk-bridge-056871
	I0407 13:50:51.063423 1230577 main.go:141] libmachine: (bridge-056871) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d9:a5:38", ip: ""} in network mk-bridge-056871: {Iface:virbr2 ExpiryTime:2025-04-07 14:50:42 +0000 UTC Type:0 Mac:52:54:00:d9:a5:38 Iaid: IPaddr:192.168.50.60 Prefix:24 Hostname:bridge-056871 Clientid:01:52:54:00:d9:a5:38}
	I0407 13:50:51.063463 1230577 main.go:141] libmachine: (bridge-056871) DBG | domain bridge-056871 has defined IP address 192.168.50.60 and MAC address 52:54:00:d9:a5:38 in network mk-bridge-056871
	I0407 13:50:51.063623 1230577 main.go:141] libmachine: Docker is up and running!
	I0407 13:50:51.063643 1230577 main.go:141] libmachine: Reticulating splines...
	I0407 13:50:51.063652 1230577 client.go:171] duration metric: took 24.415097468s to LocalClient.Create
	I0407 13:50:51.063680 1230577 start.go:167] duration metric: took 24.415165779s to libmachine.API.Create "bridge-056871"
	I0407 13:50:51.063694 1230577 start.go:293] postStartSetup for "bridge-056871" (driver="kvm2")
	I0407 13:50:51.063705 1230577 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0407 13:50:51.063725 1230577 main.go:141] libmachine: (bridge-056871) Calling .DriverName
	I0407 13:50:51.064040 1230577 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0407 13:50:51.064068 1230577 main.go:141] libmachine: (bridge-056871) Calling .GetSSHHostname
	I0407 13:50:51.066899 1230577 main.go:141] libmachine: (bridge-056871) DBG | domain bridge-056871 has defined MAC address 52:54:00:d9:a5:38 in network mk-bridge-056871
	I0407 13:50:51.067209 1230577 main.go:141] libmachine: (bridge-056871) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d9:a5:38", ip: ""} in network mk-bridge-056871: {Iface:virbr2 ExpiryTime:2025-04-07 14:50:42 +0000 UTC Type:0 Mac:52:54:00:d9:a5:38 Iaid: IPaddr:192.168.50.60 Prefix:24 Hostname:bridge-056871 Clientid:01:52:54:00:d9:a5:38}
	I0407 13:50:51.067244 1230577 main.go:141] libmachine: (bridge-056871) DBG | domain bridge-056871 has defined IP address 192.168.50.60 and MAC address 52:54:00:d9:a5:38 in network mk-bridge-056871
	I0407 13:50:51.067387 1230577 main.go:141] libmachine: (bridge-056871) Calling .GetSSHPort
	I0407 13:50:51.067586 1230577 main.go:141] libmachine: (bridge-056871) Calling .GetSSHKeyPath
	I0407 13:50:51.067750 1230577 main.go:141] libmachine: (bridge-056871) Calling .GetSSHUsername
	I0407 13:50:51.067884 1230577 sshutil.go:53] new ssh client: &{IP:192.168.50.60 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20602-1162386/.minikube/machines/bridge-056871/id_rsa Username:docker}
	I0407 13:50:51.163321 1230577 ssh_runner.go:195] Run: cat /etc/os-release
	I0407 13:50:51.168918 1230577 info.go:137] Remote host: Buildroot 2023.02.9
	I0407 13:50:51.168960 1230577 filesync.go:126] Scanning /home/jenkins/minikube-integration/20602-1162386/.minikube/addons for local assets ...
	I0407 13:50:51.169049 1230577 filesync.go:126] Scanning /home/jenkins/minikube-integration/20602-1162386/.minikube/files for local assets ...
	I0407 13:50:51.169147 1230577 filesync.go:149] local asset: /home/jenkins/minikube-integration/20602-1162386/.minikube/files/etc/ssl/certs/11697162.pem -> 11697162.pem in /etc/ssl/certs
	I0407 13:50:51.169246 1230577 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0407 13:50:51.182011 1230577 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20602-1162386/.minikube/files/etc/ssl/certs/11697162.pem --> /etc/ssl/certs/11697162.pem (1708 bytes)
	I0407 13:50:51.215173 1230577 start.go:296] duration metric: took 151.464024ms for postStartSetup
	I0407 13:50:51.215285 1230577 main.go:141] libmachine: (bridge-056871) Calling .GetConfigRaw
	I0407 13:50:51.216295 1230577 main.go:141] libmachine: (bridge-056871) Calling .GetIP
	I0407 13:50:51.219915 1230577 main.go:141] libmachine: (bridge-056871) DBG | domain bridge-056871 has defined MAC address 52:54:00:d9:a5:38 in network mk-bridge-056871
	I0407 13:50:51.220457 1230577 main.go:141] libmachine: (bridge-056871) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d9:a5:38", ip: ""} in network mk-bridge-056871: {Iface:virbr2 ExpiryTime:2025-04-07 14:50:42 +0000 UTC Type:0 Mac:52:54:00:d9:a5:38 Iaid: IPaddr:192.168.50.60 Prefix:24 Hostname:bridge-056871 Clientid:01:52:54:00:d9:a5:38}
	I0407 13:50:51.220492 1230577 main.go:141] libmachine: (bridge-056871) DBG | domain bridge-056871 has defined IP address 192.168.50.60 and MAC address 52:54:00:d9:a5:38 in network mk-bridge-056871
	I0407 13:50:51.220853 1230577 profile.go:143] Saving config to /home/jenkins/minikube-integration/20602-1162386/.minikube/profiles/bridge-056871/config.json ...
	I0407 13:50:51.221086 1230577 start.go:128] duration metric: took 24.597995408s to createHost
	I0407 13:50:51.221115 1230577 main.go:141] libmachine: (bridge-056871) Calling .GetSSHHostname
	I0407 13:50:51.224196 1230577 main.go:141] libmachine: (bridge-056871) DBG | domain bridge-056871 has defined MAC address 52:54:00:d9:a5:38 in network mk-bridge-056871
	I0407 13:50:51.224690 1230577 main.go:141] libmachine: (bridge-056871) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d9:a5:38", ip: ""} in network mk-bridge-056871: {Iface:virbr2 ExpiryTime:2025-04-07 14:50:42 +0000 UTC Type:0 Mac:52:54:00:d9:a5:38 Iaid: IPaddr:192.168.50.60 Prefix:24 Hostname:bridge-056871 Clientid:01:52:54:00:d9:a5:38}
	I0407 13:50:51.224724 1230577 main.go:141] libmachine: (bridge-056871) DBG | domain bridge-056871 has defined IP address 192.168.50.60 and MAC address 52:54:00:d9:a5:38 in network mk-bridge-056871
	I0407 13:50:51.224909 1230577 main.go:141] libmachine: (bridge-056871) Calling .GetSSHPort
	I0407 13:50:51.225153 1230577 main.go:141] libmachine: (bridge-056871) Calling .GetSSHKeyPath
	I0407 13:50:51.225359 1230577 main.go:141] libmachine: (bridge-056871) Calling .GetSSHKeyPath
	I0407 13:50:51.225593 1230577 main.go:141] libmachine: (bridge-056871) Calling .GetSSHUsername
	I0407 13:50:51.225819 1230577 main.go:141] libmachine: Using SSH client type: native
	I0407 13:50:51.226057 1230577 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x836360] 0x839060 <nil>  [] 0s} 192.168.50.60 22 <nil> <nil>}
	I0407 13:50:51.226070 1230577 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0407 13:50:51.355321 1230577 main.go:141] libmachine: SSH cmd err, output: <nil>: 1744033851.327638706
	
	I0407 13:50:51.355352 1230577 fix.go:216] guest clock: 1744033851.327638706
	I0407 13:50:51.355363 1230577 fix.go:229] Guest: 2025-04-07 13:50:51.327638706 +0000 UTC Remote: 2025-04-07 13:50:51.221101199 +0000 UTC m=+26.995589901 (delta=106.537507ms)
	I0407 13:50:51.355411 1230577 fix.go:200] guest clock delta is within tolerance: 106.537507ms
	I0407 13:50:51.355419 1230577 start.go:83] releasing machines lock for "bridge-056871", held for 24.732580363s
	I0407 13:50:51.355448 1230577 main.go:141] libmachine: (bridge-056871) Calling .DriverName
	I0407 13:50:51.355762 1230577 main.go:141] libmachine: (bridge-056871) Calling .GetIP
	I0407 13:50:51.358759 1230577 main.go:141] libmachine: (bridge-056871) DBG | domain bridge-056871 has defined MAC address 52:54:00:d9:a5:38 in network mk-bridge-056871
	I0407 13:50:51.359218 1230577 main.go:141] libmachine: (bridge-056871) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d9:a5:38", ip: ""} in network mk-bridge-056871: {Iface:virbr2 ExpiryTime:2025-04-07 14:50:42 +0000 UTC Type:0 Mac:52:54:00:d9:a5:38 Iaid: IPaddr:192.168.50.60 Prefix:24 Hostname:bridge-056871 Clientid:01:52:54:00:d9:a5:38}
	I0407 13:50:51.359247 1230577 main.go:141] libmachine: (bridge-056871) DBG | domain bridge-056871 has defined IP address 192.168.50.60 and MAC address 52:54:00:d9:a5:38 in network mk-bridge-056871
	I0407 13:50:51.359537 1230577 main.go:141] libmachine: (bridge-056871) Calling .DriverName
	I0407 13:50:51.360286 1230577 main.go:141] libmachine: (bridge-056871) Calling .DriverName
	I0407 13:50:51.360561 1230577 main.go:141] libmachine: (bridge-056871) Calling .DriverName
	I0407 13:50:51.360655 1230577 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0407 13:50:51.360707 1230577 main.go:141] libmachine: (bridge-056871) Calling .GetSSHHostname
	I0407 13:50:51.360847 1230577 ssh_runner.go:195] Run: cat /version.json
	I0407 13:50:51.360878 1230577 main.go:141] libmachine: (bridge-056871) Calling .GetSSHHostname
	I0407 13:50:51.363825 1230577 main.go:141] libmachine: (bridge-056871) DBG | domain bridge-056871 has defined MAC address 52:54:00:d9:a5:38 in network mk-bridge-056871
	I0407 13:50:51.364425 1230577 main.go:141] libmachine: (bridge-056871) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d9:a5:38", ip: ""} in network mk-bridge-056871: {Iface:virbr2 ExpiryTime:2025-04-07 14:50:42 +0000 UTC Type:0 Mac:52:54:00:d9:a5:38 Iaid: IPaddr:192.168.50.60 Prefix:24 Hostname:bridge-056871 Clientid:01:52:54:00:d9:a5:38}
	I0407 13:50:51.364462 1230577 main.go:141] libmachine: (bridge-056871) DBG | domain bridge-056871 has defined IP address 192.168.50.60 and MAC address 52:54:00:d9:a5:38 in network mk-bridge-056871
	I0407 13:50:51.364487 1230577 main.go:141] libmachine: (bridge-056871) DBG | domain bridge-056871 has defined MAC address 52:54:00:d9:a5:38 in network mk-bridge-056871
	I0407 13:50:51.364681 1230577 main.go:141] libmachine: (bridge-056871) Calling .GetSSHPort
	I0407 13:50:51.364990 1230577 main.go:141] libmachine: (bridge-056871) Calling .GetSSHKeyPath
	I0407 13:50:51.365112 1230577 main.go:141] libmachine: (bridge-056871) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d9:a5:38", ip: ""} in network mk-bridge-056871: {Iface:virbr2 ExpiryTime:2025-04-07 14:50:42 +0000 UTC Type:0 Mac:52:54:00:d9:a5:38 Iaid: IPaddr:192.168.50.60 Prefix:24 Hostname:bridge-056871 Clientid:01:52:54:00:d9:a5:38}
	I0407 13:50:51.365141 1230577 main.go:141] libmachine: (bridge-056871) DBG | domain bridge-056871 has defined IP address 192.168.50.60 and MAC address 52:54:00:d9:a5:38 in network mk-bridge-056871
	I0407 13:50:51.365196 1230577 main.go:141] libmachine: (bridge-056871) Calling .GetSSHUsername
	I0407 13:50:51.365461 1230577 sshutil.go:53] new ssh client: &{IP:192.168.50.60 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20602-1162386/.minikube/machines/bridge-056871/id_rsa Username:docker}
	I0407 13:50:51.365537 1230577 main.go:141] libmachine: (bridge-056871) Calling .GetSSHPort
	I0407 13:50:51.365764 1230577 main.go:141] libmachine: (bridge-056871) Calling .GetSSHKeyPath
	I0407 13:50:51.365992 1230577 main.go:141] libmachine: (bridge-056871) Calling .GetSSHUsername
	I0407 13:50:51.366203 1230577 sshutil.go:53] new ssh client: &{IP:192.168.50.60 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20602-1162386/.minikube/machines/bridge-056871/id_rsa Username:docker}
	I0407 13:50:51.470790 1230577 ssh_runner.go:195] Run: systemctl --version
	I0407 13:50:51.477081 1230577 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0407 13:50:51.646446 1230577 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0407 13:50:51.652665 1230577 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0407 13:50:51.652749 1230577 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0407 13:50:51.670656 1230577 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0407 13:50:51.670685 1230577 start.go:495] detecting cgroup driver to use...
	I0407 13:50:51.670770 1230577 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0407 13:50:51.690714 1230577 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0407 13:50:51.708136 1230577 docker.go:217] disabling cri-docker service (if available) ...
	I0407 13:50:51.708236 1230577 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0407 13:50:51.724771 1230577 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0407 13:50:51.742167 1230577 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0407 13:50:51.888167 1230577 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0407 13:50:52.046050 1230577 docker.go:233] disabling docker service ...
	I0407 13:50:52.046143 1230577 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0407 13:50:52.064251 1230577 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0407 13:50:52.080907 1230577 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0407 13:50:52.237674 1230577 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0407 13:50:52.367460 1230577 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0407 13:50:52.381926 1230577 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0407 13:50:52.401291 1230577 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10" pause image...
	I0407 13:50:52.401354 1230577 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0407 13:50:52.412491 1230577 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0407 13:50:52.412572 1230577 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0407 13:50:52.425226 1230577 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0407 13:50:52.436161 1230577 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0407 13:50:52.448684 1230577 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0407 13:50:52.461652 1230577 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0407 13:50:52.474488 1230577 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I0407 13:50:52.493283 1230577 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I0407 13:50:52.504400 1230577 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0407 13:50:52.514824 1230577 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0407 13:50:52.514917 1230577 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0407 13:50:52.529376 1230577 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0407 13:50:52.541468 1230577 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0407 13:50:52.678297 1230577 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0407 13:50:52.792609 1230577 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I0407 13:50:52.792702 1230577 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0407 13:50:52.798134 1230577 start.go:563] Will wait 60s for crictl version
	I0407 13:50:52.798199 1230577 ssh_runner.go:195] Run: which crictl
	I0407 13:50:52.803082 1230577 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0407 13:50:52.859038 1230577 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0407 13:50:52.859147 1230577 ssh_runner.go:195] Run: crio --version
	I0407 13:50:52.891576 1230577 ssh_runner.go:195] Run: crio --version
	I0407 13:50:52.923542 1230577 out.go:177] * Preparing Kubernetes v1.32.2 on CRI-O 1.29.1 ...
	I0407 13:50:52.925081 1230577 main.go:141] libmachine: (bridge-056871) Calling .GetIP
	I0407 13:50:52.928505 1230577 main.go:141] libmachine: (bridge-056871) DBG | domain bridge-056871 has defined MAC address 52:54:00:d9:a5:38 in network mk-bridge-056871
	I0407 13:50:52.928967 1230577 main.go:141] libmachine: (bridge-056871) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d9:a5:38", ip: ""} in network mk-bridge-056871: {Iface:virbr2 ExpiryTime:2025-04-07 14:50:42 +0000 UTC Type:0 Mac:52:54:00:d9:a5:38 Iaid: IPaddr:192.168.50.60 Prefix:24 Hostname:bridge-056871 Clientid:01:52:54:00:d9:a5:38}
	I0407 13:50:52.929009 1230577 main.go:141] libmachine: (bridge-056871) DBG | domain bridge-056871 has defined IP address 192.168.50.60 and MAC address 52:54:00:d9:a5:38 in network mk-bridge-056871
	I0407 13:50:52.929315 1230577 ssh_runner.go:195] Run: grep 192.168.50.1	host.minikube.internal$ /etc/hosts
	I0407 13:50:52.935087 1230577 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.50.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0407 13:50:52.949529 1230577 kubeadm.go:883] updating cluster {Name:bridge-056871 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.35.0-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.46-1743675393-20591@sha256:8de8167e280414c9d16e4c3da59bc85bc7c9cc24228af995863fc7bcabfcf727 Memory:3072 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.32.2 ClusterName:bridge-056871 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:bridge} Nodes:[{Name: IP:192.168.50.60 Port:8443 KubernetesVersion:v1.32.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0407 13:50:52.949701 1230577 preload.go:131] Checking if preload exists for k8s version v1.32.2 and runtime crio
	I0407 13:50:52.949838 1230577 ssh_runner.go:195] Run: sudo crictl images --output json
	I0407 13:50:52.987568 1230577 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.32.2". assuming images are not preloaded.
	I0407 13:50:52.987653 1230577 ssh_runner.go:195] Run: which lz4
	I0407 13:50:52.992548 1230577 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I0407 13:50:52.998863 1230577 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0407 13:50:52.998913 1230577 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20602-1162386/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.32.2-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (399124012 bytes)
	I0407 13:50:51.189863 1229086 addons.go:514] duration metric: took 1.336491082s for enable addons: enabled=[default-storageclass storage-provisioner]
	I0407 13:50:51.238034 1229086 kapi.go:214] "coredns" deployment in "kube-system" namespace and "flannel-056871" context rescaled to 1 replicas
	I0407 13:50:52.257086 1229086 node_ready.go:53] node "flannel-056871" has status "Ready":"False"
	I0407 13:50:54.756619 1229086 node_ready.go:53] node "flannel-056871" has status "Ready":"False"
	I0407 13:50:54.234617 1220973 pod_ready.go:103] pod "metrics-server-f79f97bbb-m78vh" in "kube-system" namespace has status "Ready":"False"
	I0407 13:50:56.732728 1220973 pod_ready.go:103] pod "metrics-server-f79f97bbb-m78vh" in "kube-system" namespace has status "Ready":"False"
	I0407 13:50:54.515892 1230577 crio.go:462] duration metric: took 1.523394215s to copy over tarball
	I0407 13:50:54.515995 1230577 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I0407 13:50:57.213356 1230577 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (2.697323774s)
	I0407 13:50:57.213402 1230577 crio.go:469] duration metric: took 2.697477479s to extract the tarball
	I0407 13:50:57.213413 1230577 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I0407 13:50:57.255268 1230577 ssh_runner.go:195] Run: sudo crictl images --output json
	I0407 13:50:57.318494 1230577 crio.go:514] all images are preloaded for cri-o runtime.
	I0407 13:50:57.318534 1230577 cache_images.go:84] Images are preloaded, skipping loading
	I0407 13:50:57.318547 1230577 kubeadm.go:934] updating node { 192.168.50.60 8443 v1.32.2 crio true true} ...
	I0407 13:50:57.318677 1230577 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.32.2/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=bridge-056871 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.50.60
	
	[Install]
	 config:
	{KubernetesVersion:v1.32.2 ClusterName:bridge-056871 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:bridge}
	I0407 13:50:57.318774 1230577 ssh_runner.go:195] Run: crio config
	I0407 13:50:57.380053 1230577 cni.go:84] Creating CNI manager for "bridge"
	I0407 13:50:57.380079 1230577 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0407 13:50:57.380105 1230577 kubeadm.go:189] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.50.60 APIServerPort:8443 KubernetesVersion:v1.32.2 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:bridge-056871 NodeName:bridge-056871 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.50.60"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.50.60 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0407 13:50:57.380278 1230577 kubeadm.go:195] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.50.60
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "bridge-056871"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.50.60"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.50.60"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      - name: "proxy-refresh-interval"
	        value: "70000"
	kubernetesVersion: v1.32.2
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0407 13:50:57.380354 1230577 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.32.2
	I0407 13:50:57.392866 1230577 binaries.go:44] Found k8s binaries, skipping transfer
	I0407 13:50:57.392960 1230577 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0407 13:50:57.406687 1230577 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (312 bytes)
	I0407 13:50:57.427880 1230577 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0407 13:50:57.448616 1230577 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2290 bytes)
	I0407 13:50:57.470040 1230577 ssh_runner.go:195] Run: grep 192.168.50.60	control-plane.minikube.internal$ /etc/hosts
	I0407 13:50:57.475302 1230577 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.50.60	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0407 13:50:57.491749 1230577 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0407 13:50:57.639795 1230577 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0407 13:50:57.658241 1230577 certs.go:68] Setting up /home/jenkins/minikube-integration/20602-1162386/.minikube/profiles/bridge-056871 for IP: 192.168.50.60
	I0407 13:50:57.658290 1230577 certs.go:194] generating shared ca certs ...
	I0407 13:50:57.658317 1230577 certs.go:226] acquiring lock for ca certs: {Name:mk8e89191fca7f2111bdd08c345368f593b0d5d0 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0407 13:50:57.658561 1230577 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/20602-1162386/.minikube/ca.key
	I0407 13:50:57.658619 1230577 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/20602-1162386/.minikube/proxy-client-ca.key
	I0407 13:50:57.658633 1230577 certs.go:256] generating profile certs ...
	I0407 13:50:57.658706 1230577 certs.go:363] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/20602-1162386/.minikube/profiles/bridge-056871/client.key
	I0407 13:50:57.658742 1230577 crypto.go:68] Generating cert /home/jenkins/minikube-integration/20602-1162386/.minikube/profiles/bridge-056871/client.crt with IP's: []
	I0407 13:50:57.974616 1230577 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/20602-1162386/.minikube/profiles/bridge-056871/client.crt ...
	I0407 13:50:57.974653 1230577 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20602-1162386/.minikube/profiles/bridge-056871/client.crt: {Name:mkc867212f90b2762394f4051a0f0af7353f610d Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0407 13:50:57.974835 1230577 crypto.go:164] Writing key to /home/jenkins/minikube-integration/20602-1162386/.minikube/profiles/bridge-056871/client.key ...
	I0407 13:50:57.974848 1230577 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20602-1162386/.minikube/profiles/bridge-056871/client.key: {Name:mk05aafef2f5921529a0b513feffd0dc25ca3d50 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0407 13:50:57.974962 1230577 certs.go:363] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/20602-1162386/.minikube/profiles/bridge-056871/apiserver.key.d9851c72
	I0407 13:50:57.974997 1230577 crypto.go:68] Generating cert /home/jenkins/minikube-integration/20602-1162386/.minikube/profiles/bridge-056871/apiserver.crt.d9851c72 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.50.60]
	I0407 13:50:58.209297 1230577 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/20602-1162386/.minikube/profiles/bridge-056871/apiserver.crt.d9851c72 ...
	I0407 13:50:58.209334 1230577 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20602-1162386/.minikube/profiles/bridge-056871/apiserver.crt.d9851c72: {Name:mk98eb2013c8df0dacc23f994053809c81d58a0d Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0407 13:50:58.209554 1230577 crypto.go:164] Writing key to /home/jenkins/minikube-integration/20602-1162386/.minikube/profiles/bridge-056871/apiserver.key.d9851c72 ...
	I0407 13:50:58.209574 1230577 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20602-1162386/.minikube/profiles/bridge-056871/apiserver.key.d9851c72: {Name:mkcf9ecf16e30518e911274a1e12ea04551f6078 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0407 13:50:58.209682 1230577 certs.go:381] copying /home/jenkins/minikube-integration/20602-1162386/.minikube/profiles/bridge-056871/apiserver.crt.d9851c72 -> /home/jenkins/minikube-integration/20602-1162386/.minikube/profiles/bridge-056871/apiserver.crt
	I0407 13:50:58.209815 1230577 certs.go:385] copying /home/jenkins/minikube-integration/20602-1162386/.minikube/profiles/bridge-056871/apiserver.key.d9851c72 -> /home/jenkins/minikube-integration/20602-1162386/.minikube/profiles/bridge-056871/apiserver.key
	I0407 13:50:58.209874 1230577 certs.go:363] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/20602-1162386/.minikube/profiles/bridge-056871/proxy-client.key
	I0407 13:50:58.209891 1230577 crypto.go:68] Generating cert /home/jenkins/minikube-integration/20602-1162386/.minikube/profiles/bridge-056871/proxy-client.crt with IP's: []
	I0407 13:50:58.795229 1230577 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/20602-1162386/.minikube/profiles/bridge-056871/proxy-client.crt ...
	I0407 13:50:58.795270 1230577 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20602-1162386/.minikube/profiles/bridge-056871/proxy-client.crt: {Name:mke5bf8aa5bc8a94a5bfc7724d5b4299874dc779 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0407 13:50:58.795464 1230577 crypto.go:164] Writing key to /home/jenkins/minikube-integration/20602-1162386/.minikube/profiles/bridge-056871/proxy-client.key ...
	I0407 13:50:58.795479 1230577 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20602-1162386/.minikube/profiles/bridge-056871/proxy-client.key: {Name:mkcbdcd825378ab7ffd2c1e3905866b5d0bc479d Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0407 13:50:58.795656 1230577 certs.go:484] found cert: /home/jenkins/minikube-integration/20602-1162386/.minikube/certs/1169716.pem (1338 bytes)
	W0407 13:50:58.795697 1230577 certs.go:480] ignoring /home/jenkins/minikube-integration/20602-1162386/.minikube/certs/1169716_empty.pem, impossibly tiny 0 bytes
	I0407 13:50:58.795707 1230577 certs.go:484] found cert: /home/jenkins/minikube-integration/20602-1162386/.minikube/certs/ca-key.pem (1675 bytes)
	I0407 13:50:58.795729 1230577 certs.go:484] found cert: /home/jenkins/minikube-integration/20602-1162386/.minikube/certs/ca.pem (1078 bytes)
	I0407 13:50:58.795754 1230577 certs.go:484] found cert: /home/jenkins/minikube-integration/20602-1162386/.minikube/certs/cert.pem (1123 bytes)
	I0407 13:50:58.795777 1230577 certs.go:484] found cert: /home/jenkins/minikube-integration/20602-1162386/.minikube/certs/key.pem (1675 bytes)
	I0407 13:50:58.795814 1230577 certs.go:484] found cert: /home/jenkins/minikube-integration/20602-1162386/.minikube/files/etc/ssl/certs/11697162.pem (1708 bytes)
	I0407 13:50:58.796406 1230577 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20602-1162386/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0407 13:50:58.827453 1230577 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20602-1162386/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0407 13:50:58.860891 1230577 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20602-1162386/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0407 13:50:58.891713 1230577 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20602-1162386/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0407 13:50:58.920508 1230577 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20602-1162386/.minikube/profiles/bridge-056871/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1419 bytes)
	I0407 13:50:58.949761 1230577 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20602-1162386/.minikube/profiles/bridge-056871/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0407 13:50:58.979704 1230577 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20602-1162386/.minikube/profiles/bridge-056871/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0407 13:50:59.011376 1230577 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20602-1162386/.minikube/profiles/bridge-056871/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0407 13:50:59.041385 1230577 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20602-1162386/.minikube/files/etc/ssl/certs/11697162.pem --> /usr/share/ca-certificates/11697162.pem (1708 bytes)
	I0407 13:50:59.071111 1230577 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20602-1162386/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0407 13:50:59.099290 1230577 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20602-1162386/.minikube/certs/1169716.pem --> /usr/share/ca-certificates/1169716.pem (1338 bytes)
	I0407 13:50:59.129519 1230577 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0407 13:50:59.150197 1230577 ssh_runner.go:195] Run: openssl version
	I0407 13:50:59.156990 1230577 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0407 13:50:59.169023 1230577 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0407 13:50:59.175362 1230577 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Apr  7 12:14 /usr/share/ca-certificates/minikubeCA.pem
	I0407 13:50:59.175452 1230577 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0407 13:50:59.182838 1230577 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0407 13:50:59.196179 1230577 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/1169716.pem && ln -fs /usr/share/ca-certificates/1169716.pem /etc/ssl/certs/1169716.pem"
	I0407 13:50:59.220374 1230577 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/1169716.pem
	I0407 13:50:59.225777 1230577 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Apr  7 12:22 /usr/share/ca-certificates/1169716.pem
	I0407 13:50:59.225851 1230577 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/1169716.pem
	I0407 13:50:59.232822 1230577 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/1169716.pem /etc/ssl/certs/51391683.0"
	I0407 13:50:59.251539 1230577 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/11697162.pem && ln -fs /usr/share/ca-certificates/11697162.pem /etc/ssl/certs/11697162.pem"
	I0407 13:50:59.280935 1230577 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/11697162.pem
	I0407 13:50:59.289842 1230577 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Apr  7 12:22 /usr/share/ca-certificates/11697162.pem
	I0407 13:50:59.289940 1230577 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/11697162.pem
	I0407 13:50:59.299101 1230577 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/11697162.pem /etc/ssl/certs/3ec20f2e.0"
	I0407 13:50:59.315775 1230577 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0407 13:50:59.321662 1230577 certs.go:399] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I0407 13:50:59.321777 1230577 kubeadm.go:392] StartCluster: {Name:bridge-056871 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.35.0-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.46-1743675393-20591@sha256:8de8167e280414c9d16e4c3da59bc85bc7c9cc24228af995863fc7bcabfcf727 Memory:3072 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.32.2 ClusterName:bridge-056871 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:bridge} Nodes:[{Name: IP:192.168.50.60 Port:8443 KubernetesVersion:v1.32.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0407 13:50:59.321930 1230577 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0407 13:50:59.322014 1230577 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0407 13:50:59.362423 1230577 cri.go:89] found id: ""
	I0407 13:50:59.362507 1230577 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0407 13:50:59.373224 1230577 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0407 13:50:59.386723 1230577 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0407 13:50:59.399124 1230577 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0407 13:50:59.399168 1230577 kubeadm.go:157] found existing configuration files:
	
	I0407 13:50:59.399226 1230577 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0407 13:50:59.411049 1230577 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0407 13:50:59.411127 1230577 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0407 13:50:59.426032 1230577 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0407 13:50:59.438641 1230577 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0407 13:50:59.438728 1230577 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0407 13:50:59.451089 1230577 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0407 13:50:59.463857 1230577 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0407 13:50:59.463974 1230577 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0407 13:50:59.479205 1230577 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0407 13:50:59.493922 1230577 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0407 13:50:59.494003 1230577 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0407 13:50:59.505870 1230577 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.32.2:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0407 13:50:59.564385 1230577 kubeadm.go:310] [init] Using Kubernetes version: v1.32.2
	I0407 13:50:59.564482 1230577 kubeadm.go:310] [preflight] Running pre-flight checks
	I0407 13:50:59.695317 1230577 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0407 13:50:59.695454 1230577 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0407 13:50:59.695578 1230577 kubeadm.go:310] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I0407 13:50:59.706980 1230577 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0407 13:50:56.757337 1229086 node_ready.go:53] node "flannel-056871" has status "Ready":"False"
	I0407 13:50:59.265295 1229086 node_ready.go:49] node "flannel-056871" has status "Ready":"True"
	I0407 13:50:59.265338 1229086 node_ready.go:38] duration metric: took 9.012025989s for node "flannel-056871" to be "Ready" ...
	I0407 13:50:59.265352 1229086 pod_ready.go:36] extra waiting up to 15m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0407 13:50:59.614573 1229086 pod_ready.go:79] waiting up to 15m0s for pod "coredns-668d6bf9bc-wtbtr" in "kube-system" namespace to be "Ready" ...
	I0407 13:50:59.708816 1230577 out.go:235]   - Generating certificates and keys ...
	I0407 13:50:59.708936 1230577 kubeadm.go:310] [certs] Using existing ca certificate authority
	I0407 13:50:59.709062 1230577 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I0407 13:50:59.754958 1230577 kubeadm.go:310] [certs] Generating "apiserver-kubelet-client" certificate and key
	I0407 13:51:00.057162 1230577 kubeadm.go:310] [certs] Generating "front-proxy-ca" certificate and key
	I0407 13:51:00.247122 1230577 kubeadm.go:310] [certs] Generating "front-proxy-client" certificate and key
	I0407 13:51:00.307908 1230577 kubeadm.go:310] [certs] Generating "etcd/ca" certificate and key
	I0407 13:51:00.548568 1230577 kubeadm.go:310] [certs] Generating "etcd/server" certificate and key
	I0407 13:51:00.548728 1230577 kubeadm.go:310] [certs] etcd/server serving cert is signed for DNS names [bridge-056871 localhost] and IPs [192.168.50.60 127.0.0.1 ::1]
	I0407 13:51:00.647617 1230577 kubeadm.go:310] [certs] Generating "etcd/peer" certificate and key
	I0407 13:51:00.647862 1230577 kubeadm.go:310] [certs] etcd/peer serving cert is signed for DNS names [bridge-056871 localhost] and IPs [192.168.50.60 127.0.0.1 ::1]
	I0407 13:51:00.792377 1230577 kubeadm.go:310] [certs] Generating "etcd/healthcheck-client" certificate and key
	I0407 13:51:00.871086 1230577 kubeadm.go:310] [certs] Generating "apiserver-etcd-client" certificate and key
	I0407 13:51:00.924184 1230577 kubeadm.go:310] [certs] Generating "sa" key and public key
	I0407 13:51:00.924560 1230577 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0407 13:51:01.350299 1230577 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0407 13:51:01.615993 1230577 kubeadm.go:310] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I0407 13:51:01.929649 1230577 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0407 13:51:02.113360 1230577 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0407 13:51:02.546006 1230577 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0407 13:51:02.546752 1230577 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0407 13:51:02.552370 1230577 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0407 13:50:58.732838 1220973 pod_ready.go:103] pod "metrics-server-f79f97bbb-m78vh" in "kube-system" namespace has status "Ready":"False"
	I0407 13:50:59.232858 1220973 pod_ready.go:82] duration metric: took 4m0.006754984s for pod "metrics-server-f79f97bbb-m78vh" in "kube-system" namespace to be "Ready" ...
	E0407 13:50:59.232890 1220973 pod_ready.go:67] WaitExtra: waitPodCondition: context deadline exceeded
	I0407 13:50:59.232901 1220973 pod_ready.go:39] duration metric: took 4m5.548332556s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0407 13:50:59.232938 1220973 api_server.go:52] waiting for apiserver process to appear ...
	I0407 13:50:59.232999 1220973 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0407 13:50:59.233061 1220973 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0407 13:50:59.299596 1220973 cri.go:89] found id: "6cbdc1dbe232daca30cf9798cbe45bc4e4b5484f2e30d667c64446b5286a7548"
	I0407 13:50:59.299625 1220973 cri.go:89] found id: ""
	I0407 13:50:59.299636 1220973 logs.go:282] 1 containers: [6cbdc1dbe232daca30cf9798cbe45bc4e4b5484f2e30d667c64446b5286a7548]
	I0407 13:50:59.299702 1220973 ssh_runner.go:195] Run: which crictl
	I0407 13:50:59.305111 1220973 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0407 13:50:59.305225 1220973 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0407 13:50:59.352747 1220973 cri.go:89] found id: "a812790b510c3b709c41019dad30590a5e59858ee6d5580754e4a036c2976bba"
	I0407 13:50:59.352779 1220973 cri.go:89] found id: ""
	I0407 13:50:59.352789 1220973 logs.go:282] 1 containers: [a812790b510c3b709c41019dad30590a5e59858ee6d5580754e4a036c2976bba]
	I0407 13:50:59.352846 1220973 ssh_runner.go:195] Run: which crictl
	I0407 13:50:59.357342 1220973 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0407 13:50:59.357450 1220973 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0407 13:50:59.403512 1220973 cri.go:89] found id: "4810859fa0f7ef2d1589ac95411a7ac1507611346d1aaa753426dcc88a4e856e"
	I0407 13:50:59.403544 1220973 cri.go:89] found id: ""
	I0407 13:50:59.403556 1220973 logs.go:282] 1 containers: [4810859fa0f7ef2d1589ac95411a7ac1507611346d1aaa753426dcc88a4e856e]
	I0407 13:50:59.403632 1220973 ssh_runner.go:195] Run: which crictl
	I0407 13:50:59.408194 1220973 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0407 13:50:59.408287 1220973 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0407 13:50:59.456348 1220973 cri.go:89] found id: "fb859882ec6a9fd778bf3ed031011efa021fa44fc8fe605ae66cde13666f4e31"
	I0407 13:50:59.456380 1220973 cri.go:89] found id: ""
	I0407 13:50:59.456390 1220973 logs.go:282] 1 containers: [fb859882ec6a9fd778bf3ed031011efa021fa44fc8fe605ae66cde13666f4e31]
	I0407 13:50:59.456459 1220973 ssh_runner.go:195] Run: which crictl
	I0407 13:50:59.461952 1220973 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0407 13:50:59.462054 1220973 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0407 13:50:59.521364 1220973 cri.go:89] found id: "32e3e8695b0d8a7b728aab2ac20aacda522f31d2370af79a49fc2b2ad83d48ce"
	I0407 13:50:59.521415 1220973 cri.go:89] found id: ""
	I0407 13:50:59.521424 1220973 logs.go:282] 1 containers: [32e3e8695b0d8a7b728aab2ac20aacda522f31d2370af79a49fc2b2ad83d48ce]
	I0407 13:50:59.521505 1220973 ssh_runner.go:195] Run: which crictl
	I0407 13:50:59.527616 1220973 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0407 13:50:59.527742 1220973 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0407 13:50:59.578098 1220973 cri.go:89] found id: "73783df615d51c4dbbdc8b3d7491f98b0db8a0fe47164cbfb337f6d530ef0479"
	I0407 13:50:59.578131 1220973 cri.go:89] found id: ""
	I0407 13:50:59.578141 1220973 logs.go:282] 1 containers: [73783df615d51c4dbbdc8b3d7491f98b0db8a0fe47164cbfb337f6d530ef0479]
	I0407 13:50:59.578211 1220973 ssh_runner.go:195] Run: which crictl
	I0407 13:50:59.585662 1220973 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0407 13:50:59.585783 1220973 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0407 13:50:59.641063 1220973 cri.go:89] found id: ""
	I0407 13:50:59.641098 1220973 logs.go:282] 0 containers: []
	W0407 13:50:59.641109 1220973 logs.go:284] No container was found matching "kindnet"
	I0407 13:50:59.641118 1220973 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0407 13:50:59.641207 1220973 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0407 13:50:59.689067 1220973 cri.go:89] found id: "76e9f9bee671aee2fee8265d665ba5af5294dbaad7e0029b08a664f346f9b48a"
	I0407 13:50:59.689112 1220973 cri.go:89] found id: ""
	I0407 13:50:59.689125 1220973 logs.go:282] 1 containers: [76e9f9bee671aee2fee8265d665ba5af5294dbaad7e0029b08a664f346f9b48a]
	I0407 13:50:59.689210 1220973 ssh_runner.go:195] Run: which crictl
	I0407 13:50:59.695252 1220973 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I0407 13:50:59.695343 1220973 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I0407 13:50:59.740221 1220973 cri.go:89] found id: "10ed6f8775b27e44b030bfbb966fbdec66d0417583ee1437a63f77d3ad67fac9"
	I0407 13:50:59.740252 1220973 cri.go:89] found id: "1ed6295bdc8301b8f8525a335c839a14438ad10707fce521a1414f1950e8458f"
	I0407 13:50:59.740256 1220973 cri.go:89] found id: ""
	I0407 13:50:59.740270 1220973 logs.go:282] 2 containers: [10ed6f8775b27e44b030bfbb966fbdec66d0417583ee1437a63f77d3ad67fac9 1ed6295bdc8301b8f8525a335c839a14438ad10707fce521a1414f1950e8458f]
	I0407 13:50:59.740348 1220973 ssh_runner.go:195] Run: which crictl
	I0407 13:50:59.745389 1220973 ssh_runner.go:195] Run: which crictl
	I0407 13:50:59.750517 1220973 logs.go:123] Gathering logs for kube-apiserver [6cbdc1dbe232daca30cf9798cbe45bc4e4b5484f2e30d667c64446b5286a7548] ...
	I0407 13:50:59.750557 1220973 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 6cbdc1dbe232daca30cf9798cbe45bc4e4b5484f2e30d667c64446b5286a7548"
	I0407 13:50:59.810125 1220973 logs.go:123] Gathering logs for coredns [4810859fa0f7ef2d1589ac95411a7ac1507611346d1aaa753426dcc88a4e856e] ...
	I0407 13:50:59.810182 1220973 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 4810859fa0f7ef2d1589ac95411a7ac1507611346d1aaa753426dcc88a4e856e"
	I0407 13:50:59.860690 1220973 logs.go:123] Gathering logs for kube-proxy [32e3e8695b0d8a7b728aab2ac20aacda522f31d2370af79a49fc2b2ad83d48ce] ...
	I0407 13:50:59.860741 1220973 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 32e3e8695b0d8a7b728aab2ac20aacda522f31d2370af79a49fc2b2ad83d48ce"
	I0407 13:50:59.904254 1220973 logs.go:123] Gathering logs for kube-controller-manager [73783df615d51c4dbbdc8b3d7491f98b0db8a0fe47164cbfb337f6d530ef0479] ...
	I0407 13:50:59.904291 1220973 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 73783df615d51c4dbbdc8b3d7491f98b0db8a0fe47164cbfb337f6d530ef0479"
	I0407 13:50:59.981119 1220973 logs.go:123] Gathering logs for storage-provisioner [1ed6295bdc8301b8f8525a335c839a14438ad10707fce521a1414f1950e8458f] ...
	I0407 13:50:59.981181 1220973 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 1ed6295bdc8301b8f8525a335c839a14438ad10707fce521a1414f1950e8458f"
	I0407 13:51:00.033545 1220973 logs.go:123] Gathering logs for CRI-O ...
	I0407 13:51:00.033589 1220973 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0407 13:51:00.691879 1220973 logs.go:123] Gathering logs for dmesg ...
	I0407 13:51:00.691971 1220973 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0407 13:51:00.709206 1220973 logs.go:123] Gathering logs for etcd [a812790b510c3b709c41019dad30590a5e59858ee6d5580754e4a036c2976bba] ...
	I0407 13:51:00.709256 1220973 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 a812790b510c3b709c41019dad30590a5e59858ee6d5580754e4a036c2976bba"
	I0407 13:51:00.771594 1220973 logs.go:123] Gathering logs for kube-scheduler [fb859882ec6a9fd778bf3ed031011efa021fa44fc8fe605ae66cde13666f4e31] ...
	I0407 13:51:00.771666 1220973 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 fb859882ec6a9fd778bf3ed031011efa021fa44fc8fe605ae66cde13666f4e31"
	I0407 13:51:00.821357 1220973 logs.go:123] Gathering logs for kubernetes-dashboard [76e9f9bee671aee2fee8265d665ba5af5294dbaad7e0029b08a664f346f9b48a] ...
	I0407 13:51:00.821404 1220973 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 76e9f9bee671aee2fee8265d665ba5af5294dbaad7e0029b08a664f346f9b48a"
	I0407 13:51:00.865005 1220973 logs.go:123] Gathering logs for storage-provisioner [10ed6f8775b27e44b030bfbb966fbdec66d0417583ee1437a63f77d3ad67fac9] ...
	I0407 13:51:00.865053 1220973 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 10ed6f8775b27e44b030bfbb966fbdec66d0417583ee1437a63f77d3ad67fac9"
	I0407 13:51:00.904894 1220973 logs.go:123] Gathering logs for container status ...
	I0407 13:51:00.904934 1220973 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0407 13:51:00.968005 1220973 logs.go:123] Gathering logs for kubelet ...
	I0407 13:51:00.968072 1220973 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0407 13:51:01.067832 1220973 logs.go:123] Gathering logs for describe nodes ...
	I0407 13:51:01.067896 1220973 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.32.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0407 13:51:02.554852 1230577 out.go:235]   - Booting up control plane ...
	I0407 13:51:02.555003 1230577 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0407 13:51:02.555092 1230577 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0407 13:51:02.555231 1230577 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0407 13:51:02.575773 1230577 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0407 13:51:02.585901 1230577 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0407 13:51:02.586018 1230577 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I0407 13:51:02.742270 1230577 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I0407 13:51:02.742452 1230577 kubeadm.go:310] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I0407 13:51:03.243655 1230577 kubeadm.go:310] [kubelet-check] The kubelet is healthy after 502.087964ms
	I0407 13:51:03.243741 1230577 kubeadm.go:310] [api-check] Waiting for a healthy API server. This can take up to 4m0s
	I0407 13:51:01.622566 1229086 pod_ready.go:103] pod "coredns-668d6bf9bc-wtbtr" in "kube-system" namespace has status "Ready":"False"
	I0407 13:51:04.124574 1229086 pod_ready.go:103] pod "coredns-668d6bf9bc-wtbtr" in "kube-system" namespace has status "Ready":"False"
	I0407 13:51:03.750152 1220973 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0407 13:51:03.767770 1220973 api_server.go:72] duration metric: took 4m17.405802625s to wait for apiserver process to appear ...
	I0407 13:51:03.767806 1220973 api_server.go:88] waiting for apiserver healthz status ...
	I0407 13:51:03.767861 1220973 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0407 13:51:03.767930 1220973 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0407 13:51:03.818498 1220973 cri.go:89] found id: "6cbdc1dbe232daca30cf9798cbe45bc4e4b5484f2e30d667c64446b5286a7548"
	I0407 13:51:03.818536 1220973 cri.go:89] found id: ""
	I0407 13:51:03.818548 1220973 logs.go:282] 1 containers: [6cbdc1dbe232daca30cf9798cbe45bc4e4b5484f2e30d667c64446b5286a7548]
	I0407 13:51:03.818627 1220973 ssh_runner.go:195] Run: which crictl
	I0407 13:51:03.823650 1220973 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0407 13:51:03.823760 1220973 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0407 13:51:03.875522 1220973 cri.go:89] found id: "a812790b510c3b709c41019dad30590a5e59858ee6d5580754e4a036c2976bba"
	I0407 13:51:03.875682 1220973 cri.go:89] found id: ""
	I0407 13:51:03.875708 1220973 logs.go:282] 1 containers: [a812790b510c3b709c41019dad30590a5e59858ee6d5580754e4a036c2976bba]
	I0407 13:51:03.875823 1220973 ssh_runner.go:195] Run: which crictl
	I0407 13:51:03.881759 1220973 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0407 13:51:03.881872 1220973 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0407 13:51:03.934057 1220973 cri.go:89] found id: "4810859fa0f7ef2d1589ac95411a7ac1507611346d1aaa753426dcc88a4e856e"
	I0407 13:51:03.934089 1220973 cri.go:89] found id: ""
	I0407 13:51:03.934100 1220973 logs.go:282] 1 containers: [4810859fa0f7ef2d1589ac95411a7ac1507611346d1aaa753426dcc88a4e856e]
	I0407 13:51:03.934167 1220973 ssh_runner.go:195] Run: which crictl
	I0407 13:51:03.941166 1220973 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0407 13:51:03.941286 1220973 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0407 13:51:04.007594 1220973 cri.go:89] found id: "fb859882ec6a9fd778bf3ed031011efa021fa44fc8fe605ae66cde13666f4e31"
	I0407 13:51:04.007635 1220973 cri.go:89] found id: ""
	I0407 13:51:04.007647 1220973 logs.go:282] 1 containers: [fb859882ec6a9fd778bf3ed031011efa021fa44fc8fe605ae66cde13666f4e31]
	I0407 13:51:04.007730 1220973 ssh_runner.go:195] Run: which crictl
	I0407 13:51:04.013908 1220973 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0407 13:51:04.014034 1220973 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0407 13:51:04.081983 1220973 cri.go:89] found id: "32e3e8695b0d8a7b728aab2ac20aacda522f31d2370af79a49fc2b2ad83d48ce"
	I0407 13:51:04.082043 1220973 cri.go:89] found id: ""
	I0407 13:51:04.082062 1220973 logs.go:282] 1 containers: [32e3e8695b0d8a7b728aab2ac20aacda522f31d2370af79a49fc2b2ad83d48ce]
	I0407 13:51:04.082162 1220973 ssh_runner.go:195] Run: which crictl
	I0407 13:51:04.088227 1220973 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0407 13:51:04.088493 1220973 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0407 13:51:04.134694 1220973 cri.go:89] found id: "73783df615d51c4dbbdc8b3d7491f98b0db8a0fe47164cbfb337f6d530ef0479"
	I0407 13:51:04.134730 1220973 cri.go:89] found id: ""
	I0407 13:51:04.134744 1220973 logs.go:282] 1 containers: [73783df615d51c4dbbdc8b3d7491f98b0db8a0fe47164cbfb337f6d530ef0479]
	I0407 13:51:04.134818 1220973 ssh_runner.go:195] Run: which crictl
	I0407 13:51:04.140372 1220973 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0407 13:51:04.140465 1220973 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0407 13:51:04.183295 1220973 cri.go:89] found id: ""
	I0407 13:51:04.183336 1220973 logs.go:282] 0 containers: []
	W0407 13:51:04.183347 1220973 logs.go:284] No container was found matching "kindnet"
	I0407 13:51:04.183355 1220973 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0407 13:51:04.183426 1220973 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0407 13:51:04.231005 1220973 cri.go:89] found id: "76e9f9bee671aee2fee8265d665ba5af5294dbaad7e0029b08a664f346f9b48a"
	I0407 13:51:04.231046 1220973 cri.go:89] found id: ""
	I0407 13:51:04.231058 1220973 logs.go:282] 1 containers: [76e9f9bee671aee2fee8265d665ba5af5294dbaad7e0029b08a664f346f9b48a]
	I0407 13:51:04.231145 1220973 ssh_runner.go:195] Run: which crictl
	I0407 13:51:04.237741 1220973 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I0407 13:51:04.237843 1220973 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I0407 13:51:04.288156 1220973 cri.go:89] found id: "10ed6f8775b27e44b030bfbb966fbdec66d0417583ee1437a63f77d3ad67fac9"
	I0407 13:51:04.288193 1220973 cri.go:89] found id: "1ed6295bdc8301b8f8525a335c839a14438ad10707fce521a1414f1950e8458f"
	I0407 13:51:04.288198 1220973 cri.go:89] found id: ""
	I0407 13:51:04.288209 1220973 logs.go:282] 2 containers: [10ed6f8775b27e44b030bfbb966fbdec66d0417583ee1437a63f77d3ad67fac9 1ed6295bdc8301b8f8525a335c839a14438ad10707fce521a1414f1950e8458f]
	I0407 13:51:04.288293 1220973 ssh_runner.go:195] Run: which crictl
	I0407 13:51:04.293482 1220973 ssh_runner.go:195] Run: which crictl
	I0407 13:51:04.299121 1220973 logs.go:123] Gathering logs for kube-scheduler [fb859882ec6a9fd778bf3ed031011efa021fa44fc8fe605ae66cde13666f4e31] ...
	I0407 13:51:04.299170 1220973 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 fb859882ec6a9fd778bf3ed031011efa021fa44fc8fe605ae66cde13666f4e31"
	I0407 13:51:04.341860 1220973 logs.go:123] Gathering logs for kube-proxy [32e3e8695b0d8a7b728aab2ac20aacda522f31d2370af79a49fc2b2ad83d48ce] ...
	I0407 13:51:04.341899 1220973 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 32e3e8695b0d8a7b728aab2ac20aacda522f31d2370af79a49fc2b2ad83d48ce"
	I0407 13:51:04.399464 1220973 logs.go:123] Gathering logs for storage-provisioner [10ed6f8775b27e44b030bfbb966fbdec66d0417583ee1437a63f77d3ad67fac9] ...
	I0407 13:51:04.399522 1220973 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 10ed6f8775b27e44b030bfbb966fbdec66d0417583ee1437a63f77d3ad67fac9"
	I0407 13:51:04.450476 1220973 logs.go:123] Gathering logs for storage-provisioner [1ed6295bdc8301b8f8525a335c839a14438ad10707fce521a1414f1950e8458f] ...
	I0407 13:51:04.450532 1220973 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 1ed6295bdc8301b8f8525a335c839a14438ad10707fce521a1414f1950e8458f"
	I0407 13:51:04.496635 1220973 logs.go:123] Gathering logs for kubelet ...
	I0407 13:51:04.496675 1220973 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0407 13:51:04.599935 1220973 logs.go:123] Gathering logs for describe nodes ...
	I0407 13:51:04.599980 1220973 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.32.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0407 13:51:04.736164 1220973 logs.go:123] Gathering logs for kube-controller-manager [73783df615d51c4dbbdc8b3d7491f98b0db8a0fe47164cbfb337f6d530ef0479] ...
	I0407 13:51:04.736213 1220973 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 73783df615d51c4dbbdc8b3d7491f98b0db8a0fe47164cbfb337f6d530ef0479"
	I0407 13:51:04.801457 1220973 logs.go:123] Gathering logs for kubernetes-dashboard [76e9f9bee671aee2fee8265d665ba5af5294dbaad7e0029b08a664f346f9b48a] ...
	I0407 13:51:04.801522 1220973 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 76e9f9bee671aee2fee8265d665ba5af5294dbaad7e0029b08a664f346f9b48a"
	I0407 13:51:04.851783 1220973 logs.go:123] Gathering logs for CRI-O ...
	I0407 13:51:04.851841 1220973 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0407 13:51:05.386797 1220973 logs.go:123] Gathering logs for container status ...
	I0407 13:51:05.386851 1220973 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0407 13:51:05.446579 1220973 logs.go:123] Gathering logs for dmesg ...
	I0407 13:51:05.446640 1220973 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0407 13:51:05.468506 1220973 logs.go:123] Gathering logs for kube-apiserver [6cbdc1dbe232daca30cf9798cbe45bc4e4b5484f2e30d667c64446b5286a7548] ...
	I0407 13:51:05.468564 1220973 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 6cbdc1dbe232daca30cf9798cbe45bc4e4b5484f2e30d667c64446b5286a7548"
	I0407 13:51:05.529064 1220973 logs.go:123] Gathering logs for etcd [a812790b510c3b709c41019dad30590a5e59858ee6d5580754e4a036c2976bba] ...
	I0407 13:51:05.529121 1220973 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 a812790b510c3b709c41019dad30590a5e59858ee6d5580754e4a036c2976bba"
	I0407 13:51:05.589325 1220973 logs.go:123] Gathering logs for coredns [4810859fa0f7ef2d1589ac95411a7ac1507611346d1aaa753426dcc88a4e856e] ...
	I0407 13:51:05.589379 1220973 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 4810859fa0f7ef2d1589ac95411a7ac1507611346d1aaa753426dcc88a4e856e"
	I0407 13:51:08.744835 1230577 kubeadm.go:310] [api-check] The API server is healthy after 5.502339468s
	I0407 13:51:08.765064 1230577 kubeadm.go:310] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0407 13:51:08.786774 1230577 kubeadm.go:310] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I0407 13:51:08.845055 1230577 kubeadm.go:310] [upload-certs] Skipping phase. Please see --upload-certs
	I0407 13:51:08.845300 1230577 kubeadm.go:310] [mark-control-plane] Marking the node bridge-056871 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I0407 13:51:08.870492 1230577 kubeadm.go:310] [bootstrap-token] Using token: q192q8.5uqppii6wweemeid
	I0407 13:51:08.872629 1230577 out.go:235]   - Configuring RBAC rules ...
	I0407 13:51:08.872809 1230577 kubeadm.go:310] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0407 13:51:08.889428 1230577 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I0407 13:51:08.906087 1230577 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0407 13:51:08.920396 1230577 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I0407 13:51:08.932368 1230577 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0407 13:51:08.940930 1230577 kubeadm.go:310] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0407 13:51:09.157342 1230577 kubeadm.go:310] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I0407 13:51:09.608864 1230577 kubeadm.go:310] [addons] Applied essential addon: CoreDNS
	I0407 13:51:10.152069 1230577 kubeadm.go:310] [addons] Applied essential addon: kube-proxy
	I0407 13:51:10.152099 1230577 kubeadm.go:310] 
	I0407 13:51:10.152202 1230577 kubeadm.go:310] Your Kubernetes control-plane has initialized successfully!
	I0407 13:51:10.152213 1230577 kubeadm.go:310] 
	I0407 13:51:10.152334 1230577 kubeadm.go:310] To start using your cluster, you need to run the following as a regular user:
	I0407 13:51:10.152343 1230577 kubeadm.go:310] 
	I0407 13:51:10.152379 1230577 kubeadm.go:310]   mkdir -p $HOME/.kube
	I0407 13:51:10.152469 1230577 kubeadm.go:310]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0407 13:51:10.152528 1230577 kubeadm.go:310]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0407 13:51:10.152535 1230577 kubeadm.go:310] 
	I0407 13:51:10.152581 1230577 kubeadm.go:310] Alternatively, if you are the root user, you can run:
	I0407 13:51:10.152587 1230577 kubeadm.go:310] 
	I0407 13:51:10.152640 1230577 kubeadm.go:310]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I0407 13:51:10.152647 1230577 kubeadm.go:310] 
	I0407 13:51:10.152691 1230577 kubeadm.go:310] You should now deploy a pod network to the cluster.
	I0407 13:51:10.152775 1230577 kubeadm.go:310] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0407 13:51:10.152867 1230577 kubeadm.go:310]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0407 13:51:10.152877 1230577 kubeadm.go:310] 
	I0407 13:51:10.152972 1230577 kubeadm.go:310] You can now join any number of control-plane nodes by copying certificate authorities
	I0407 13:51:10.153231 1230577 kubeadm.go:310] and service account keys on each node and then running the following as root:
	I0407 13:51:10.153316 1230577 kubeadm.go:310] 
	I0407 13:51:10.153526 1230577 kubeadm.go:310]   kubeadm join control-plane.minikube.internal:8443 --token q192q8.5uqppii6wweemeid \
	I0407 13:51:10.153665 1230577 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:001387253bb6e222db2af12e9fcbe5a1c3ee2a6f53970e58b5a7d017a3fc6618 \
	I0407 13:51:10.153696 1230577 kubeadm.go:310] 	--control-plane 
	I0407 13:51:10.153715 1230577 kubeadm.go:310] 
	I0407 13:51:10.153851 1230577 kubeadm.go:310] Then you can join any number of worker nodes by running the following on each as root:
	I0407 13:51:10.153900 1230577 kubeadm.go:310] 
	I0407 13:51:10.154039 1230577 kubeadm.go:310] kubeadm join control-plane.minikube.internal:8443 --token q192q8.5uqppii6wweemeid \
	I0407 13:51:10.154310 1230577 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:001387253bb6e222db2af12e9fcbe5a1c3ee2a6f53970e58b5a7d017a3fc6618 
	I0407 13:51:10.154481 1230577 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
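For reference (not part of the captured run): the kubeadm output above is the only place this bootstrap token is printed, and bootstrap tokens expire after 24 hours by default. If the token has lapsed by the time another node needs to join, an equivalent join command can be regenerated on the control-plane node with kubeadm itself, e.g.:

    # run inside the control-plane VM; prints a fresh "kubeadm join ... --token ... --discovery-token-ca-cert-hash ..." line
    sudo kubeadm token create --print-join-command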
	I0407 13:51:10.154507 1230577 cni.go:84] Creating CNI manager for "bridge"
	I0407 13:51:10.156948 1230577 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0407 13:51:06.623742 1229086 pod_ready.go:103] pod "coredns-668d6bf9bc-wtbtr" in "kube-system" namespace has status "Ready":"False"
	I0407 13:51:09.124641 1229086 pod_ready.go:103] pod "coredns-668d6bf9bc-wtbtr" in "kube-system" namespace has status "Ready":"False"
	I0407 13:51:08.147478 1220973 api_server.go:253] Checking apiserver healthz at https://192.168.72.39:8444/healthz ...
	I0407 13:51:08.156708 1220973 api_server.go:279] https://192.168.72.39:8444/healthz returned 200:
	ok
	I0407 13:51:08.158021 1220973 api_server.go:141] control plane version: v1.32.2
	I0407 13:51:08.158052 1220973 api_server.go:131] duration metric: took 4.390237602s to wait for apiserver health ...
	I0407 13:51:08.158064 1220973 system_pods.go:43] waiting for kube-system pods to appear ...
	I0407 13:51:08.158093 1220973 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0407 13:51:08.158144 1220973 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0407 13:51:08.201978 1220973 cri.go:89] found id: "6cbdc1dbe232daca30cf9798cbe45bc4e4b5484f2e30d667c64446b5286a7548"
	I0407 13:51:08.202009 1220973 cri.go:89] found id: ""
	I0407 13:51:08.202021 1220973 logs.go:282] 1 containers: [6cbdc1dbe232daca30cf9798cbe45bc4e4b5484f2e30d667c64446b5286a7548]
	I0407 13:51:08.202088 1220973 ssh_runner.go:195] Run: which crictl
	I0407 13:51:08.206567 1220973 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0407 13:51:08.206658 1220973 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0407 13:51:08.266741 1220973 cri.go:89] found id: "a812790b510c3b709c41019dad30590a5e59858ee6d5580754e4a036c2976bba"
	I0407 13:51:08.266771 1220973 cri.go:89] found id: ""
	I0407 13:51:08.266782 1220973 logs.go:282] 1 containers: [a812790b510c3b709c41019dad30590a5e59858ee6d5580754e4a036c2976bba]
	I0407 13:51:08.266853 1220973 ssh_runner.go:195] Run: which crictl
	I0407 13:51:08.271249 1220973 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0407 13:51:08.271321 1220973 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0407 13:51:08.310236 1220973 cri.go:89] found id: "4810859fa0f7ef2d1589ac95411a7ac1507611346d1aaa753426dcc88a4e856e"
	I0407 13:51:08.310270 1220973 cri.go:89] found id: ""
	I0407 13:51:08.310279 1220973 logs.go:282] 1 containers: [4810859fa0f7ef2d1589ac95411a7ac1507611346d1aaa753426dcc88a4e856e]
	I0407 13:51:08.310331 1220973 ssh_runner.go:195] Run: which crictl
	I0407 13:51:08.314760 1220973 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0407 13:51:08.314857 1220973 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0407 13:51:08.358924 1220973 cri.go:89] found id: "fb859882ec6a9fd778bf3ed031011efa021fa44fc8fe605ae66cde13666f4e31"
	I0407 13:51:08.358959 1220973 cri.go:89] found id: ""
	I0407 13:51:08.358970 1220973 logs.go:282] 1 containers: [fb859882ec6a9fd778bf3ed031011efa021fa44fc8fe605ae66cde13666f4e31]
	I0407 13:51:08.359049 1220973 ssh_runner.go:195] Run: which crictl
	I0407 13:51:08.363412 1220973 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0407 13:51:08.363502 1220973 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0407 13:51:08.401615 1220973 cri.go:89] found id: "32e3e8695b0d8a7b728aab2ac20aacda522f31d2370af79a49fc2b2ad83d48ce"
	I0407 13:51:08.401643 1220973 cri.go:89] found id: ""
	I0407 13:51:08.401653 1220973 logs.go:282] 1 containers: [32e3e8695b0d8a7b728aab2ac20aacda522f31d2370af79a49fc2b2ad83d48ce]
	I0407 13:51:08.401733 1220973 ssh_runner.go:195] Run: which crictl
	I0407 13:51:08.407568 1220973 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0407 13:51:08.407681 1220973 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0407 13:51:08.450987 1220973 cri.go:89] found id: "73783df615d51c4dbbdc8b3d7491f98b0db8a0fe47164cbfb337f6d530ef0479"
	I0407 13:51:08.451037 1220973 cri.go:89] found id: ""
	I0407 13:51:08.451072 1220973 logs.go:282] 1 containers: [73783df615d51c4dbbdc8b3d7491f98b0db8a0fe47164cbfb337f6d530ef0479]
	I0407 13:51:08.451144 1220973 ssh_runner.go:195] Run: which crictl
	I0407 13:51:08.455919 1220973 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0407 13:51:08.456033 1220973 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0407 13:51:08.494960 1220973 cri.go:89] found id: ""
	I0407 13:51:08.495003 1220973 logs.go:282] 0 containers: []
	W0407 13:51:08.495017 1220973 logs.go:284] No container was found matching "kindnet"
	I0407 13:51:08.495025 1220973 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I0407 13:51:08.495106 1220973 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I0407 13:51:08.543463 1220973 cri.go:89] found id: "10ed6f8775b27e44b030bfbb966fbdec66d0417583ee1437a63f77d3ad67fac9"
	I0407 13:51:08.543488 1220973 cri.go:89] found id: "1ed6295bdc8301b8f8525a335c839a14438ad10707fce521a1414f1950e8458f"
	I0407 13:51:08.543493 1220973 cri.go:89] found id: ""
	I0407 13:51:08.543519 1220973 logs.go:282] 2 containers: [10ed6f8775b27e44b030bfbb966fbdec66d0417583ee1437a63f77d3ad67fac9 1ed6295bdc8301b8f8525a335c839a14438ad10707fce521a1414f1950e8458f]
	I0407 13:51:08.543572 1220973 ssh_runner.go:195] Run: which crictl
	I0407 13:51:08.548346 1220973 ssh_runner.go:195] Run: which crictl
	I0407 13:51:08.552343 1220973 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0407 13:51:08.552415 1220973 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0407 13:51:08.592306 1220973 cri.go:89] found id: "76e9f9bee671aee2fee8265d665ba5af5294dbaad7e0029b08a664f346f9b48a"
	I0407 13:51:08.592342 1220973 cri.go:89] found id: ""
	I0407 13:51:08.592354 1220973 logs.go:282] 1 containers: [76e9f9bee671aee2fee8265d665ba5af5294dbaad7e0029b08a664f346f9b48a]
	I0407 13:51:08.592427 1220973 ssh_runner.go:195] Run: which crictl
	I0407 13:51:08.596797 1220973 logs.go:123] Gathering logs for dmesg ...
	I0407 13:51:08.596825 1220973 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0407 13:51:08.611785 1220973 logs.go:123] Gathering logs for describe nodes ...
	I0407 13:51:08.611816 1220973 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.32.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0407 13:51:08.722127 1220973 logs.go:123] Gathering logs for etcd [a812790b510c3b709c41019dad30590a5e59858ee6d5580754e4a036c2976bba] ...
	I0407 13:51:08.722186 1220973 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 a812790b510c3b709c41019dad30590a5e59858ee6d5580754e4a036c2976bba"
	I0407 13:51:08.784857 1220973 logs.go:123] Gathering logs for coredns [4810859fa0f7ef2d1589ac95411a7ac1507611346d1aaa753426dcc88a4e856e] ...
	I0407 13:51:08.784904 1220973 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 4810859fa0f7ef2d1589ac95411a7ac1507611346d1aaa753426dcc88a4e856e"
	I0407 13:51:08.823331 1220973 logs.go:123] Gathering logs for kube-scheduler [fb859882ec6a9fd778bf3ed031011efa021fa44fc8fe605ae66cde13666f4e31] ...
	I0407 13:51:08.823361 1220973 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 fb859882ec6a9fd778bf3ed031011efa021fa44fc8fe605ae66cde13666f4e31"
	I0407 13:51:08.861421 1220973 logs.go:123] Gathering logs for kube-proxy [32e3e8695b0d8a7b728aab2ac20aacda522f31d2370af79a49fc2b2ad83d48ce] ...
	I0407 13:51:08.861457 1220973 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 32e3e8695b0d8a7b728aab2ac20aacda522f31d2370af79a49fc2b2ad83d48ce"
	I0407 13:51:08.923758 1220973 logs.go:123] Gathering logs for storage-provisioner [10ed6f8775b27e44b030bfbb966fbdec66d0417583ee1437a63f77d3ad67fac9] ...
	I0407 13:51:08.923805 1220973 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 10ed6f8775b27e44b030bfbb966fbdec66d0417583ee1437a63f77d3ad67fac9"
	I0407 13:51:08.978649 1220973 logs.go:123] Gathering logs for kubernetes-dashboard [76e9f9bee671aee2fee8265d665ba5af5294dbaad7e0029b08a664f346f9b48a] ...
	I0407 13:51:08.978702 1220973 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 76e9f9bee671aee2fee8265d665ba5af5294dbaad7e0029b08a664f346f9b48a"
	I0407 13:51:09.028285 1220973 logs.go:123] Gathering logs for kubelet ...
	I0407 13:51:09.028327 1220973 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0407 13:51:09.130281 1220973 logs.go:123] Gathering logs for kube-apiserver [6cbdc1dbe232daca30cf9798cbe45bc4e4b5484f2e30d667c64446b5286a7548] ...
	I0407 13:51:09.130329 1220973 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 6cbdc1dbe232daca30cf9798cbe45bc4e4b5484f2e30d667c64446b5286a7548"
	I0407 13:51:09.203999 1220973 logs.go:123] Gathering logs for kube-controller-manager [73783df615d51c4dbbdc8b3d7491f98b0db8a0fe47164cbfb337f6d530ef0479] ...
	I0407 13:51:09.204060 1220973 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 73783df615d51c4dbbdc8b3d7491f98b0db8a0fe47164cbfb337f6d530ef0479"
	I0407 13:51:09.280859 1220973 logs.go:123] Gathering logs for storage-provisioner [1ed6295bdc8301b8f8525a335c839a14438ad10707fce521a1414f1950e8458f] ...
	I0407 13:51:09.280917 1220973 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 1ed6295bdc8301b8f8525a335c839a14438ad10707fce521a1414f1950e8458f"
	I0407 13:51:09.329964 1220973 logs.go:123] Gathering logs for CRI-O ...
	I0407 13:51:09.329999 1220973 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0407 13:51:09.777851 1220973 logs.go:123] Gathering logs for container status ...
	I0407 13:51:09.777933 1220973 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0407 13:51:12.329139 1220973 system_pods.go:59] 8 kube-system pods found
	I0407 13:51:12.329192 1220973 system_pods.go:61] "coredns-668d6bf9bc-l8dqs" [d22da438-7207-4ea5-886e-4877202a0503] Running
	I0407 13:51:12.329198 1220973 system_pods.go:61] "etcd-default-k8s-diff-port-405061" [616d0285-308b-4f87-a840-2d6c4aafa12b] Running
	I0407 13:51:12.329204 1220973 system_pods.go:61] "kube-apiserver-default-k8s-diff-port-405061" [2bccbc06-ecc1-4a5c-80b4-1b1287cad2a8] Running
	I0407 13:51:12.329209 1220973 system_pods.go:61] "kube-controller-manager-default-k8s-diff-port-405061" [f6ef48bd-c717-4a62-90b5-2ba0d395dc23] Running
	I0407 13:51:12.329213 1220973 system_pods.go:61] "kube-proxy-59k7q" [fd139676-0ec9-4996-8f72-b2cc18db7c58] Running
	I0407 13:51:12.329217 1220973 system_pods.go:61] "kube-scheduler-default-k8s-diff-port-405061" [7691ca99-87e9-4a20-8e8a-ad956b63c8f1] Running
	I0407 13:51:12.329223 1220973 system_pods.go:61] "metrics-server-f79f97bbb-m78vh" [29d4eed6-dbb9-4a42-a4ed-644adfc6c32e] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0407 13:51:12.329228 1220973 system_pods.go:61] "storage-provisioner" [81745e26-f62c-431f-a4ed-8919d519705f] Running
	I0407 13:51:12.329249 1220973 system_pods.go:74] duration metric: took 4.171177863s to wait for pod list to return data ...
	I0407 13:51:12.329258 1220973 default_sa.go:34] waiting for default service account to be created ...
	I0407 13:51:12.334358 1220973 default_sa.go:45] found service account: "default"
	I0407 13:51:12.334389 1220973 default_sa.go:55] duration metric: took 5.124791ms for default service account to be created ...
	I0407 13:51:12.334400 1220973 system_pods.go:116] waiting for k8s-apps to be running ...
	I0407 13:51:12.338677 1220973 system_pods.go:86] 8 kube-system pods found
	I0407 13:51:12.338728 1220973 system_pods.go:89] "coredns-668d6bf9bc-l8dqs" [d22da438-7207-4ea5-886e-4877202a0503] Running
	I0407 13:51:12.338736 1220973 system_pods.go:89] "etcd-default-k8s-diff-port-405061" [616d0285-308b-4f87-a840-2d6c4aafa12b] Running
	I0407 13:51:12.338741 1220973 system_pods.go:89] "kube-apiserver-default-k8s-diff-port-405061" [2bccbc06-ecc1-4a5c-80b4-1b1287cad2a8] Running
	I0407 13:51:12.338747 1220973 system_pods.go:89] "kube-controller-manager-default-k8s-diff-port-405061" [f6ef48bd-c717-4a62-90b5-2ba0d395dc23] Running
	I0407 13:51:12.338751 1220973 system_pods.go:89] "kube-proxy-59k7q" [fd139676-0ec9-4996-8f72-b2cc18db7c58] Running
	I0407 13:51:12.338756 1220973 system_pods.go:89] "kube-scheduler-default-k8s-diff-port-405061" [7691ca99-87e9-4a20-8e8a-ad956b63c8f1] Running
	I0407 13:51:12.338765 1220973 system_pods.go:89] "metrics-server-f79f97bbb-m78vh" [29d4eed6-dbb9-4a42-a4ed-644adfc6c32e] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0407 13:51:12.338770 1220973 system_pods.go:89] "storage-provisioner" [81745e26-f62c-431f-a4ed-8919d519705f] Running
	I0407 13:51:12.338782 1220973 system_pods.go:126] duration metric: took 4.37545ms to wait for k8s-apps to be running ...
	I0407 13:51:12.338792 1220973 system_svc.go:44] waiting for kubelet service to be running ....
	I0407 13:51:12.338848 1220973 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0407 13:51:12.359474 1220973 system_svc.go:56] duration metric: took 20.668679ms WaitForService to wait for kubelet
	I0407 13:51:12.359522 1220973 kubeadm.go:582] duration metric: took 4m25.997559577s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0407 13:51:12.359551 1220973 node_conditions.go:102] verifying NodePressure condition ...
	I0407 13:51:12.363017 1220973 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0407 13:51:12.363064 1220973 node_conditions.go:123] node cpu capacity is 2
	I0407 13:51:12.363082 1220973 node_conditions.go:105] duration metric: took 3.524897ms to run NodePressure ...
	I0407 13:51:12.363101 1220973 start.go:241] waiting for startup goroutines ...
	I0407 13:51:12.363118 1220973 start.go:246] waiting for cluster config update ...
	I0407 13:51:12.363136 1220973 start.go:255] writing updated cluster config ...
	I0407 13:51:12.363481 1220973 ssh_runner.go:195] Run: rm -f paused
	I0407 13:51:12.432521 1220973 start.go:600] kubectl: 1.32.3, cluster: 1.32.2 (minor skew: 0)
	I0407 13:51:12.436018 1220973 out.go:177] * Done! kubectl is now configured to use "default-k8s-diff-port-405061" cluster and "default" namespace by default
	I0407 13:51:11.623233 1229086 pod_ready.go:103] pod "coredns-668d6bf9bc-wtbtr" in "kube-system" namespace has status "Ready":"False"
	I0407 13:51:12.620924 1229086 pod_ready.go:93] pod "coredns-668d6bf9bc-wtbtr" in "kube-system" namespace has status "Ready":"True"
	I0407 13:51:12.620966 1229086 pod_ready.go:82] duration metric: took 13.006345622s for pod "coredns-668d6bf9bc-wtbtr" in "kube-system" namespace to be "Ready" ...
	I0407 13:51:12.620982 1229086 pod_ready.go:79] waiting up to 15m0s for pod "etcd-flannel-056871" in "kube-system" namespace to be "Ready" ...
	I0407 13:51:12.626794 1229086 pod_ready.go:93] pod "etcd-flannel-056871" in "kube-system" namespace has status "Ready":"True"
	I0407 13:51:12.626826 1229086 pod_ready.go:82] duration metric: took 5.835446ms for pod "etcd-flannel-056871" in "kube-system" namespace to be "Ready" ...
	I0407 13:51:12.626842 1229086 pod_ready.go:79] waiting up to 15m0s for pod "kube-apiserver-flannel-056871" in "kube-system" namespace to be "Ready" ...
	I0407 13:51:12.631694 1229086 pod_ready.go:93] pod "kube-apiserver-flannel-056871" in "kube-system" namespace has status "Ready":"True"
	I0407 13:51:12.631725 1229086 pod_ready.go:82] duration metric: took 4.874755ms for pod "kube-apiserver-flannel-056871" in "kube-system" namespace to be "Ready" ...
	I0407 13:51:12.631742 1229086 pod_ready.go:79] waiting up to 15m0s for pod "kube-controller-manager-flannel-056871" in "kube-system" namespace to be "Ready" ...
	I0407 13:51:12.636423 1229086 pod_ready.go:93] pod "kube-controller-manager-flannel-056871" in "kube-system" namespace has status "Ready":"True"
	I0407 13:51:12.636454 1229086 pod_ready.go:82] duration metric: took 4.705104ms for pod "kube-controller-manager-flannel-056871" in "kube-system" namespace to be "Ready" ...
	I0407 13:51:12.636468 1229086 pod_ready.go:79] waiting up to 15m0s for pod "kube-proxy-smtjx" in "kube-system" namespace to be "Ready" ...
	I0407 13:51:12.641084 1229086 pod_ready.go:93] pod "kube-proxy-smtjx" in "kube-system" namespace has status "Ready":"True"
	I0407 13:51:12.641117 1229086 pod_ready.go:82] duration metric: took 4.640592ms for pod "kube-proxy-smtjx" in "kube-system" namespace to be "Ready" ...
	I0407 13:51:12.641134 1229086 pod_ready.go:79] waiting up to 15m0s for pod "kube-scheduler-flannel-056871" in "kube-system" namespace to be "Ready" ...
	I0407 13:51:13.019186 1229086 pod_ready.go:93] pod "kube-scheduler-flannel-056871" in "kube-system" namespace has status "Ready":"True"
	I0407 13:51:13.019227 1229086 pod_ready.go:82] duration metric: took 378.081871ms for pod "kube-scheduler-flannel-056871" in "kube-system" namespace to be "Ready" ...
	I0407 13:51:13.019245 1229086 pod_ready.go:39] duration metric: took 13.753874082s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0407 13:51:13.019269 1229086 api_server.go:52] waiting for apiserver process to appear ...
	I0407 13:51:13.019345 1229086 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0407 13:51:13.036816 1229086 api_server.go:72] duration metric: took 23.18356003s to wait for apiserver process to appear ...
	I0407 13:51:13.036852 1229086 api_server.go:88] waiting for apiserver healthz status ...
	I0407 13:51:13.036873 1229086 api_server.go:253] Checking apiserver healthz at https://192.168.61.247:8443/healthz ...
	I0407 13:51:13.043471 1229086 api_server.go:279] https://192.168.61.247:8443/healthz returned 200:
	ok
	I0407 13:51:13.044789 1229086 api_server.go:141] control plane version: v1.32.2
	I0407 13:51:13.044824 1229086 api_server.go:131] duration metric: took 7.963841ms to wait for apiserver health ...
	I0407 13:51:13.044837 1229086 system_pods.go:43] waiting for kube-system pods to appear ...
	I0407 13:51:13.221807 1229086 system_pods.go:59] 7 kube-system pods found
	I0407 13:51:13.221847 1229086 system_pods.go:61] "coredns-668d6bf9bc-wtbtr" [b2923c78-dfb4-45f4-8d9c-c704efa16770] Running
	I0407 13:51:13.221852 1229086 system_pods.go:61] "etcd-flannel-056871" [a483a329-3322-45e9-a8d2-71767ab99f59] Running
	I0407 13:51:13.221856 1229086 system_pods.go:61] "kube-apiserver-flannel-056871" [f3043dae-0f67-4698-be96-b24b62b28437] Running
	I0407 13:51:13.221861 1229086 system_pods.go:61] "kube-controller-manager-flannel-056871" [ac2429e7-b9c0-4ef6-b9a2-d6213321fed6] Running
	I0407 13:51:13.221866 1229086 system_pods.go:61] "kube-proxy-smtjx" [7a3177c3-d1cd-45b3-ae8a-fc2046381c19] Running
	I0407 13:51:13.221871 1229086 system_pods.go:61] "kube-scheduler-flannel-056871" [ef4e2bc1-8a80-41ea-b563-3755728b1363] Running
	I0407 13:51:13.221877 1229086 system_pods.go:61] "storage-provisioner" [1e8fc621-4ec4-4579-bc5e-f59b83a0394d] Running
	I0407 13:51:13.221885 1229086 system_pods.go:74] duration metric: took 177.040439ms to wait for pod list to return data ...
	I0407 13:51:13.221896 1229086 default_sa.go:34] waiting for default service account to be created ...
	I0407 13:51:13.421264 1229086 default_sa.go:45] found service account: "default"
	I0407 13:51:13.421310 1229086 default_sa.go:55] duration metric: took 199.406683ms for default service account to be created ...
	I0407 13:51:13.421325 1229086 system_pods.go:116] waiting for k8s-apps to be running ...
	I0407 13:51:13.619341 1229086 system_pods.go:86] 7 kube-system pods found
	I0407 13:51:13.619373 1229086 system_pods.go:89] "coredns-668d6bf9bc-wtbtr" [b2923c78-dfb4-45f4-8d9c-c704efa16770] Running
	I0407 13:51:13.619380 1229086 system_pods.go:89] "etcd-flannel-056871" [a483a329-3322-45e9-a8d2-71767ab99f59] Running
	I0407 13:51:13.619383 1229086 system_pods.go:89] "kube-apiserver-flannel-056871" [f3043dae-0f67-4698-be96-b24b62b28437] Running
	I0407 13:51:13.619388 1229086 system_pods.go:89] "kube-controller-manager-flannel-056871" [ac2429e7-b9c0-4ef6-b9a2-d6213321fed6] Running
	I0407 13:51:13.619393 1229086 system_pods.go:89] "kube-proxy-smtjx" [7a3177c3-d1cd-45b3-ae8a-fc2046381c19] Running
	I0407 13:51:13.619397 1229086 system_pods.go:89] "kube-scheduler-flannel-056871" [ef4e2bc1-8a80-41ea-b563-3755728b1363] Running
	I0407 13:51:13.619402 1229086 system_pods.go:89] "storage-provisioner" [1e8fc621-4ec4-4579-bc5e-f59b83a0394d] Running
	I0407 13:51:13.619410 1229086 system_pods.go:126] duration metric: took 198.077767ms to wait for k8s-apps to be running ...
	I0407 13:51:13.619419 1229086 system_svc.go:44] waiting for kubelet service to be running ....
	I0407 13:51:13.619467 1229086 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0407 13:51:13.639988 1229086 system_svc.go:56] duration metric: took 20.552018ms WaitForService to wait for kubelet
	I0407 13:51:13.640033 1229086 kubeadm.go:582] duration metric: took 23.786782845s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0407 13:51:13.640057 1229086 node_conditions.go:102] verifying NodePressure condition ...
	I0407 13:51:13.819686 1229086 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0407 13:51:13.819725 1229086 node_conditions.go:123] node cpu capacity is 2
	I0407 13:51:13.819743 1229086 node_conditions.go:105] duration metric: took 179.679142ms to run NodePressure ...
	I0407 13:51:13.819760 1229086 start.go:241] waiting for startup goroutines ...
	I0407 13:51:13.819768 1229086 start.go:246] waiting for cluster config update ...
	I0407 13:51:13.819782 1229086 start.go:255] writing updated cluster config ...
	I0407 13:51:13.820097 1229086 ssh_runner.go:195] Run: rm -f paused
	I0407 13:51:13.884127 1229086 start.go:600] kubectl: 1.32.3, cluster: 1.32.2 (minor skew: 0)
	I0407 13:51:13.887015 1229086 out.go:177] * Done! kubectl is now configured to use "flannel-056871" cluster and "default" namespace by default
	I0407 13:51:10.159135 1230577 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0407 13:51:10.170594 1230577 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
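For context on the two steps above (illustrative, not captured output): /etc/cni/net.d/1-k8s.conflist is the configuration written by the "bridge" CNI manager; it typically wires the standard bridge plugin with host-local IPAM for the pod network (pod IPs in the 10.244.0.x range appear later in this log). The file can be read back through the same minikube binary used throughout this report:

    # show the CNI config that was just copied onto the bridge-056871 node
    out/minikube-linux-amd64 -p bridge-056871 ssh "sudo cat /etc/cni/net.d/1-k8s.conflist"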
	I0407 13:51:10.193430 1230577 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0407 13:51:10.193528 1230577 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.32.2/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0407 13:51:10.193556 1230577 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.32.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes bridge-056871 minikube.k8s.io/updated_at=2025_04_07T13_51_10_0700 minikube.k8s.io/version=v1.35.0 minikube.k8s.io/commit=33e6edc58d2014d70e908473920ef4ac8eae1e43 minikube.k8s.io/name=bridge-056871 minikube.k8s.io/primary=true
	I0407 13:51:10.213523 1230577 ops.go:34] apiserver oom_adj: -16
	I0407 13:51:10.404665 1230577 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.32.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0407 13:51:10.905503 1230577 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.32.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0407 13:51:11.405109 1230577 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.32.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0407 13:51:11.905139 1230577 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.32.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0407 13:51:12.404791 1230577 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.32.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0407 13:51:12.905795 1230577 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.32.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0407 13:51:13.405662 1230577 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.32.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0407 13:51:13.904970 1230577 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.32.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0407 13:51:14.405510 1230577 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.32.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0407 13:51:14.503334 1230577 kubeadm.go:1113] duration metric: took 4.309889147s to wait for elevateKubeSystemPrivileges
	I0407 13:51:14.503378 1230577 kubeadm.go:394] duration metric: took 15.181607716s to StartCluster
	I0407 13:51:14.503406 1230577 settings.go:142] acquiring lock: {Name:mk19c4dc5d7992642f3fe5ca0bdb3ea65af01b65 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0407 13:51:14.503500 1230577 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/20602-1162386/kubeconfig
	I0407 13:51:14.504964 1230577 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20602-1162386/kubeconfig: {Name:mk712863958f7dbf2601dd82dc9ca7bea42ef42f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0407 13:51:14.505295 1230577 start.go:235] Will wait 15m0s for node &{Name: IP:192.168.50.60 Port:8443 KubernetesVersion:v1.32.2 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0407 13:51:14.505337 1230577 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.32.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I0407 13:51:14.505352 1230577 addons.go:511] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I0407 13:51:14.505468 1230577 addons.go:69] Setting storage-provisioner=true in profile "bridge-056871"
	I0407 13:51:14.505488 1230577 addons.go:238] Setting addon storage-provisioner=true in "bridge-056871"
	I0407 13:51:14.505493 1230577 addons.go:69] Setting default-storageclass=true in profile "bridge-056871"
	I0407 13:51:14.505526 1230577 host.go:66] Checking if "bridge-056871" exists ...
	I0407 13:51:14.505535 1230577 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "bridge-056871"
	I0407 13:51:14.505611 1230577 config.go:182] Loaded profile config "bridge-056871": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.32.2
	I0407 13:51:14.506128 1230577 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0407 13:51:14.506169 1230577 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0407 13:51:14.506137 1230577 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0407 13:51:14.506262 1230577 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0407 13:51:14.507179 1230577 out.go:177] * Verifying Kubernetes components...
	I0407 13:51:14.508939 1230577 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0407 13:51:14.528710 1230577 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46503
	I0407 13:51:14.529591 1230577 main.go:141] libmachine: () Calling .GetVersion
	I0407 13:51:14.530356 1230577 main.go:141] libmachine: Using API Version  1
	I0407 13:51:14.530392 1230577 main.go:141] libmachine: () Calling .SetConfigRaw
	I0407 13:51:14.531236 1230577 main.go:141] libmachine: () Calling .GetMachineName
	I0407 13:51:14.531991 1230577 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0407 13:51:14.532040 1230577 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0407 13:51:14.532162 1230577 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37415
	I0407 13:51:14.532729 1230577 main.go:141] libmachine: () Calling .GetVersion
	I0407 13:51:14.533287 1230577 main.go:141] libmachine: Using API Version  1
	I0407 13:51:14.533321 1230577 main.go:141] libmachine: () Calling .SetConfigRaw
	I0407 13:51:14.533838 1230577 main.go:141] libmachine: () Calling .GetMachineName
	I0407 13:51:14.534079 1230577 main.go:141] libmachine: (bridge-056871) Calling .GetState
	I0407 13:51:14.538745 1230577 addons.go:238] Setting addon default-storageclass=true in "bridge-056871"
	I0407 13:51:14.538806 1230577 host.go:66] Checking if "bridge-056871" exists ...
	I0407 13:51:14.539192 1230577 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0407 13:51:14.539253 1230577 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0407 13:51:14.554829 1230577 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41253
	I0407 13:51:14.555631 1230577 main.go:141] libmachine: () Calling .GetVersion
	I0407 13:51:14.556333 1230577 main.go:141] libmachine: Using API Version  1
	I0407 13:51:14.556382 1230577 main.go:141] libmachine: () Calling .SetConfigRaw
	I0407 13:51:14.556874 1230577 main.go:141] libmachine: () Calling .GetMachineName
	I0407 13:51:14.557119 1230577 main.go:141] libmachine: (bridge-056871) Calling .GetState
	I0407 13:51:14.559774 1230577 main.go:141] libmachine: (bridge-056871) Calling .DriverName
	I0407 13:51:14.561700 1230577 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34847
	I0407 13:51:14.562114 1230577 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0407 13:51:14.562326 1230577 main.go:141] libmachine: () Calling .GetVersion
	I0407 13:51:14.562910 1230577 main.go:141] libmachine: Using API Version  1
	I0407 13:51:14.562944 1230577 main.go:141] libmachine: () Calling .SetConfigRaw
	I0407 13:51:14.563469 1230577 main.go:141] libmachine: () Calling .GetMachineName
	I0407 13:51:14.563736 1230577 addons.go:435] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0407 13:51:14.563756 1230577 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0407 13:51:14.563775 1230577 main.go:141] libmachine: (bridge-056871) Calling .GetSSHHostname
	I0407 13:51:14.564155 1230577 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0407 13:51:14.564224 1230577 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0407 13:51:14.567924 1230577 main.go:141] libmachine: (bridge-056871) DBG | domain bridge-056871 has defined MAC address 52:54:00:d9:a5:38 in network mk-bridge-056871
	I0407 13:51:14.568724 1230577 main.go:141] libmachine: (bridge-056871) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d9:a5:38", ip: ""} in network mk-bridge-056871: {Iface:virbr2 ExpiryTime:2025-04-07 14:50:42 +0000 UTC Type:0 Mac:52:54:00:d9:a5:38 Iaid: IPaddr:192.168.50.60 Prefix:24 Hostname:bridge-056871 Clientid:01:52:54:00:d9:a5:38}
	I0407 13:51:14.568762 1230577 main.go:141] libmachine: (bridge-056871) DBG | domain bridge-056871 has defined IP address 192.168.50.60 and MAC address 52:54:00:d9:a5:38 in network mk-bridge-056871
	I0407 13:51:14.569160 1230577 main.go:141] libmachine: (bridge-056871) Calling .GetSSHPort
	I0407 13:51:14.569440 1230577 main.go:141] libmachine: (bridge-056871) Calling .GetSSHKeyPath
	I0407 13:51:14.569994 1230577 main.go:141] libmachine: (bridge-056871) Calling .GetSSHUsername
	I0407 13:51:14.570256 1230577 sshutil.go:53] new ssh client: &{IP:192.168.50.60 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20602-1162386/.minikube/machines/bridge-056871/id_rsa Username:docker}
	I0407 13:51:14.583408 1230577 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45805
	I0407 13:51:14.584133 1230577 main.go:141] libmachine: () Calling .GetVersion
	I0407 13:51:14.584735 1230577 main.go:141] libmachine: Using API Version  1
	I0407 13:51:14.584766 1230577 main.go:141] libmachine: () Calling .SetConfigRaw
	I0407 13:51:14.585231 1230577 main.go:141] libmachine: () Calling .GetMachineName
	I0407 13:51:14.585481 1230577 main.go:141] libmachine: (bridge-056871) Calling .GetState
	I0407 13:51:14.587558 1230577 main.go:141] libmachine: (bridge-056871) Calling .DriverName
	I0407 13:51:14.587893 1230577 addons.go:435] installing /etc/kubernetes/addons/storageclass.yaml
	I0407 13:51:14.587916 1230577 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0407 13:51:14.587938 1230577 main.go:141] libmachine: (bridge-056871) Calling .GetSSHHostname
	I0407 13:51:14.591985 1230577 main.go:141] libmachine: (bridge-056871) DBG | domain bridge-056871 has defined MAC address 52:54:00:d9:a5:38 in network mk-bridge-056871
	I0407 13:51:14.592534 1230577 main.go:141] libmachine: (bridge-056871) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d9:a5:38", ip: ""} in network mk-bridge-056871: {Iface:virbr2 ExpiryTime:2025-04-07 14:50:42 +0000 UTC Type:0 Mac:52:54:00:d9:a5:38 Iaid: IPaddr:192.168.50.60 Prefix:24 Hostname:bridge-056871 Clientid:01:52:54:00:d9:a5:38}
	I0407 13:51:14.592572 1230577 main.go:141] libmachine: (bridge-056871) DBG | domain bridge-056871 has defined IP address 192.168.50.60 and MAC address 52:54:00:d9:a5:38 in network mk-bridge-056871
	I0407 13:51:14.592825 1230577 main.go:141] libmachine: (bridge-056871) Calling .GetSSHPort
	I0407 13:51:14.593111 1230577 main.go:141] libmachine: (bridge-056871) Calling .GetSSHKeyPath
	I0407 13:51:14.593318 1230577 main.go:141] libmachine: (bridge-056871) Calling .GetSSHUsername
	I0407 13:51:14.593529 1230577 sshutil.go:53] new ssh client: &{IP:192.168.50.60 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20602-1162386/.minikube/machines/bridge-056871/id_rsa Username:docker}
	I0407 13:51:14.699702 1230577 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.32.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.50.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.32.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I0407 13:51:14.728176 1230577 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0407 13:51:14.870821 1230577 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.2/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0407 13:51:14.891756 1230577 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.2/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0407 13:51:15.124664 1230577 start.go:971] {"host.minikube.internal": 192.168.50.1} host record injected into CoreDNS's ConfigMap
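The "host record injected" message above is the result of the long kubectl replace pipeline run at 13:51:14.699702: per the sed expression in that command, a hosts stanza mapping 192.168.50.1 to host.minikube.internal (with fallthrough) plus a log directive are spliced into the CoreDNS Corefile. The patched Corefile can be read back with a plain kubectl call such as the following (context name taken from the profile):

    # dump the live Corefile from the coredns ConfigMap in kube-system
    kubectl --context bridge-056871 -n kube-system get configmap coredns -o jsonpath='{.data.Corefile}'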
	I0407 13:51:15.126051 1230577 node_ready.go:35] waiting up to 15m0s for node "bridge-056871" to be "Ready" ...
	I0407 13:51:15.138090 1230577 node_ready.go:49] node "bridge-056871" has status "Ready":"True"
	I0407 13:51:15.138128 1230577 node_ready.go:38] duration metric: took 12.039283ms for node "bridge-056871" to be "Ready" ...
	I0407 13:51:15.138140 1230577 pod_ready.go:36] extra waiting up to 15m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0407 13:51:15.144672 1230577 pod_ready.go:79] waiting up to 15m0s for pod "coredns-668d6bf9bc-7hzbw" in "kube-system" namespace to be "Ready" ...
	I0407 13:51:15.633193 1230577 kapi.go:214] "coredns" deployment in "kube-system" namespace and "bridge-056871" context rescaled to 1 replicas
	I0407 13:51:15.674183 1230577 main.go:141] libmachine: Making call to close driver server
	I0407 13:51:15.674220 1230577 main.go:141] libmachine: (bridge-056871) Calling .Close
	I0407 13:51:15.674224 1230577 main.go:141] libmachine: Making call to close driver server
	I0407 13:51:15.674246 1230577 main.go:141] libmachine: (bridge-056871) Calling .Close
	I0407 13:51:15.674592 1230577 main.go:141] libmachine: Successfully made call to close driver server
	I0407 13:51:15.674621 1230577 main.go:141] libmachine: Making call to close connection to plugin binary
	I0407 13:51:15.674632 1230577 main.go:141] libmachine: Making call to close driver server
	I0407 13:51:15.674642 1230577 main.go:141] libmachine: (bridge-056871) Calling .Close
	I0407 13:51:15.674658 1230577 main.go:141] libmachine: Successfully made call to close driver server
	I0407 13:51:15.674678 1230577 main.go:141] libmachine: Making call to close connection to plugin binary
	I0407 13:51:15.674692 1230577 main.go:141] libmachine: Making call to close driver server
	I0407 13:51:15.674703 1230577 main.go:141] libmachine: (bridge-056871) Calling .Close
	I0407 13:51:15.674707 1230577 main.go:141] libmachine: (bridge-056871) DBG | Closing plugin on server side
	I0407 13:51:15.674897 1230577 main.go:141] libmachine: Successfully made call to close driver server
	I0407 13:51:15.674919 1230577 main.go:141] libmachine: Making call to close connection to plugin binary
	I0407 13:51:15.675007 1230577 main.go:141] libmachine: Successfully made call to close driver server
	I0407 13:51:15.675028 1230577 main.go:141] libmachine: Making call to close connection to plugin binary
	I0407 13:51:15.675048 1230577 main.go:141] libmachine: (bridge-056871) DBG | Closing plugin on server side
	I0407 13:51:15.704715 1230577 main.go:141] libmachine: Making call to close driver server
	I0407 13:51:15.704742 1230577 main.go:141] libmachine: (bridge-056871) Calling .Close
	I0407 13:51:15.705311 1230577 main.go:141] libmachine: (bridge-056871) DBG | Closing plugin on server side
	I0407 13:51:15.705350 1230577 main.go:141] libmachine: Successfully made call to close driver server
	I0407 13:51:15.705370 1230577 main.go:141] libmachine: Making call to close connection to plugin binary
	I0407 13:51:15.707869 1230577 out.go:177] * Enabled addons: storage-provisioner, default-storageclass
	I0407 13:51:15.709482 1230577 addons.go:514] duration metric: took 1.204112692s for enable addons: enabled=[storage-provisioner default-storageclass]
	I0407 13:51:17.151557 1230577 pod_ready.go:103] pod "coredns-668d6bf9bc-7hzbw" in "kube-system" namespace has status "Ready":"False"
	I0407 13:51:19.651176 1230577 pod_ready.go:103] pod "coredns-668d6bf9bc-7hzbw" in "kube-system" namespace has status "Ready":"False"
	I0407 13:51:22.152477 1230577 pod_ready.go:103] pod "coredns-668d6bf9bc-7hzbw" in "kube-system" namespace has status "Ready":"False"
	I0407 13:51:24.652542 1230577 pod_ready.go:103] pod "coredns-668d6bf9bc-7hzbw" in "kube-system" namespace has status "Ready":"False"
	I0407 13:51:26.152821 1230577 pod_ready.go:98] pod "coredns-668d6bf9bc-7hzbw" in "kube-system" namespace has status phase "Succeeded" (skipping!): {Phase:Succeeded Conditions:[{Type:PodReadyToStartContainers Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2025-04-07 13:51:26 +0000 UTC Reason: Message:} {Type:Initialized Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2025-04-07 13:51:14 +0000 UTC Reason:PodCompleted Message:} {Type:Ready Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2025-04-07 13:51:14 +0000 UTC Reason:PodCompleted Message:} {Type:ContainersReady Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2025-04-07 13:51:14 +0000 UTC Reason:PodCompleted Message:} {Type:PodScheduled Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2025-04-07 13:51:14 +0000 UTC Reason: Message:}] Message: Reason: NominatedNodeName: HostIP:192.168.50.60 HostIPs:[{IP:192.168.50.60}] PodIP:10.244.0.2 PodIPs:[{IP:10.244.0.2}] StartTime:2025-04-07 13:51:14 +0000 UTC InitContainerStatuses:[] ContainerStatuses:[{Name:coredns State:{Waiting:nil Running:nil Terminated:&ContainerStateTerminated{ExitCode:0,Signal:0,Reason:Completed,Message:,StartedAt:2025-04-07 13:51:15 +0000 UTC,FinishedAt:2025-04-07 13:51:25 +0000 UTC,ContainerID:cri-o://ca05d83a89ad284ae2de0937d1b89ae0dc71b45c0edaa12e93727b3e5adc2247,}} LastTerminationState:{Waiting:nil Running:nil Terminated:nil} Ready:false RestartCount:0 Image:registry.k8s.io/coredns/coredns:v1.11.3 ImageID:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6 ContainerID:cri-o://ca05d83a89ad284ae2de0937d1b89ae0dc71b45c0edaa12e93727b3e5adc2247 Started:0xc0019098a0 AllocatedResources:map[] Resources:nil VolumeMounts:[{Name:config-volume MountPath:/etc/coredns ReadOnly:true RecursiveReadOnly:0xc00262bf00} {Name:kube-api-access-tcbc5 MountPath:/var/run/secrets/kubernetes.io/serviceaccount ReadOnly:true RecursiveReadOnly:0xc00262bf10}] User:nil AllocatedResourcesStatus:[]}] QOSClass:Burstable EphemeralContainerStatuses:[] Resize: ResourceClaimStatuses:[]}
	I0407 13:51:26.152864 1230577 pod_ready.go:82] duration metric: took 11.008143482s for pod "coredns-668d6bf9bc-7hzbw" in "kube-system" namespace to be "Ready" ...
	E0407 13:51:26.152881 1230577 pod_ready.go:67] WaitExtra: waitPodCondition: pod "coredns-668d6bf9bc-7hzbw" in "kube-system" namespace has status phase "Succeeded" (skipping!): {Phase:Succeeded Conditions:[{Type:PodReadyToStartContainers Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2025-04-07 13:51:26 +0000 UTC Reason: Message:} {Type:Initialized Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2025-04-07 13:51:14 +0000 UTC Reason:PodCompleted Message:} {Type:Ready Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2025-04-07 13:51:14 +0000 UTC Reason:PodCompleted Message:} {Type:ContainersReady Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2025-04-07 13:51:14 +0000 UTC Reason:PodCompleted Message:} {Type:PodScheduled Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2025-04-07 13:51:14 +0000 UTC Reason: Message:}] Message: Reason: NominatedNodeName: HostIP:192.168.50.60 HostIPs:[{IP:192.168.50.60}] PodIP:10.244.0.2 PodIPs:[{IP:10.244.0.2}] StartTime:2025-04-07 13:51:14 +0000 UTC InitContainerStatuses:[] ContainerStatuses:[{Name:coredns State:{Waiting:nil Running:nil Terminated:&ContainerStateTerminated{ExitCode:0,Signal:0,Reason:Completed,Message:,StartedAt:2025-04-07 13:51:15 +0000 UTC,FinishedAt:2025-04-07 13:51:25 +0000 UTC,ContainerID:cri-o://ca05d83a89ad284ae2de0937d1b89ae0dc71b45c0edaa12e93727b3e5adc2247,}} LastTerminationState:{Waiting:nil Running:nil Terminated:nil} Ready:false RestartCount:0 Image:registry.k8s.io/coredns/coredns:v1.11.3 ImageID:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6 ContainerID:cri-o://ca05d83a89ad284ae2de0937d1b89ae0dc71b45c0edaa12e93727b3e5adc2247 Started:0xc0019098a0 AllocatedResources:map[] Resources:nil VolumeMounts:[{Name:config-volume MountPath:/etc/coredns ReadOnly:true RecursiveReadOnly:0xc00262bf00} {Name:kube-api-access-tcbc5 MountPath:/var/run/secrets/kubernetes.io/serviceaccount ReadOnly:true RecursiveReadOnly:0xc00262bf10}] User:nil AllocatedResourcesStatus:[]}] QOSClass:Burstable EphemeralContainerStatuses:[] Resize: ResourceClaimStatuses:[]}
	I0407 13:51:26.152921 1230577 pod_ready.go:79] waiting up to 15m0s for pod "coredns-668d6bf9bc-nld4f" in "kube-system" namespace to be "Ready" ...
	I0407 13:51:26.158910 1230577 pod_ready.go:93] pod "coredns-668d6bf9bc-nld4f" in "kube-system" namespace has status "Ready":"True"
	I0407 13:51:26.158944 1230577 pod_ready.go:82] duration metric: took 6.010021ms for pod "coredns-668d6bf9bc-nld4f" in "kube-system" namespace to be "Ready" ...
	I0407 13:51:26.158961 1230577 pod_ready.go:79] waiting up to 15m0s for pod "etcd-bridge-056871" in "kube-system" namespace to be "Ready" ...
	I0407 13:51:26.167441 1230577 pod_ready.go:93] pod "etcd-bridge-056871" in "kube-system" namespace has status "Ready":"True"
	I0407 13:51:26.167469 1230577 pod_ready.go:82] duration metric: took 8.500134ms for pod "etcd-bridge-056871" in "kube-system" namespace to be "Ready" ...
	I0407 13:51:26.167479 1230577 pod_ready.go:79] waiting up to 15m0s for pod "kube-apiserver-bridge-056871" in "kube-system" namespace to be "Ready" ...
	I0407 13:51:26.173495 1230577 pod_ready.go:93] pod "kube-apiserver-bridge-056871" in "kube-system" namespace has status "Ready":"True"
	I0407 13:51:26.173537 1230577 pod_ready.go:82] duration metric: took 6.051921ms for pod "kube-apiserver-bridge-056871" in "kube-system" namespace to be "Ready" ...
	I0407 13:51:26.173549 1230577 pod_ready.go:79] waiting up to 15m0s for pod "kube-controller-manager-bridge-056871" in "kube-system" namespace to be "Ready" ...
	I0407 13:51:26.178528 1230577 pod_ready.go:93] pod "kube-controller-manager-bridge-056871" in "kube-system" namespace has status "Ready":"True"
	I0407 13:51:26.178554 1230577 pod_ready.go:82] duration metric: took 4.998894ms for pod "kube-controller-manager-bridge-056871" in "kube-system" namespace to be "Ready" ...
	I0407 13:51:26.178567 1230577 pod_ready.go:79] waiting up to 15m0s for pod "kube-proxy-2ftsv" in "kube-system" namespace to be "Ready" ...
	I0407 13:51:26.549366 1230577 pod_ready.go:93] pod "kube-proxy-2ftsv" in "kube-system" namespace has status "Ready":"True"
	I0407 13:51:26.549405 1230577 pod_ready.go:82] duration metric: took 370.829414ms for pod "kube-proxy-2ftsv" in "kube-system" namespace to be "Ready" ...
	I0407 13:51:26.549421 1230577 pod_ready.go:79] waiting up to 15m0s for pod "kube-scheduler-bridge-056871" in "kube-system" namespace to be "Ready" ...
	I0407 13:51:26.949220 1230577 pod_ready.go:93] pod "kube-scheduler-bridge-056871" in "kube-system" namespace has status "Ready":"True"
	I0407 13:51:26.949257 1230577 pod_ready.go:82] duration metric: took 399.827446ms for pod "kube-scheduler-bridge-056871" in "kube-system" namespace to be "Ready" ...
	I0407 13:51:26.949270 1230577 pod_ready.go:39] duration metric: took 11.811111346s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0407 13:51:26.949297 1230577 api_server.go:52] waiting for apiserver process to appear ...
	I0407 13:51:26.949366 1230577 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0407 13:51:26.965744 1230577 api_server.go:72] duration metric: took 12.46041284s to wait for apiserver process to appear ...
	I0407 13:51:26.965775 1230577 api_server.go:88] waiting for apiserver healthz status ...
	I0407 13:51:26.965799 1230577 api_server.go:253] Checking apiserver healthz at https://192.168.50.60:8443/healthz ...
	I0407 13:51:26.971830 1230577 api_server.go:279] https://192.168.50.60:8443/healthz returned 200:
	ok
	I0407 13:51:26.973406 1230577 api_server.go:141] control plane version: v1.32.2
	I0407 13:51:26.973456 1230577 api_server.go:131] duration metric: took 7.670225ms to wait for apiserver health ...
	I0407 13:51:26.973471 1230577 system_pods.go:43] waiting for kube-system pods to appear ...
	I0407 13:51:27.151802 1230577 system_pods.go:59] 7 kube-system pods found
	I0407 13:51:27.151854 1230577 system_pods.go:61] "coredns-668d6bf9bc-nld4f" [3887bc68-10af-41c6-bf18-2deca678221c] Running
	I0407 13:51:27.151864 1230577 system_pods.go:61] "etcd-bridge-056871" [6fbbd69f-41ab-4e93-adfe-653b3df252db] Running
	I0407 13:51:27.151871 1230577 system_pods.go:61] "kube-apiserver-bridge-056871" [ca1ffe69-93d4-4bd9-a8ce-459be6f7f9c5] Running
	I0407 13:51:27.151877 1230577 system_pods.go:61] "kube-controller-manager-bridge-056871" [c3b30836-3a6c-4248-a79b-28e7586e6353] Running
	I0407 13:51:27.151882 1230577 system_pods.go:61] "kube-proxy-2ftsv" [8e02d336-d190-4428-8bdd-88bf28e0b4bc] Running
	I0407 13:51:27.151887 1230577 system_pods.go:61] "kube-scheduler-bridge-056871" [b502f2c1-5cdc-49e0-b66d-ae0a1363f03e] Running
	I0407 13:51:27.151892 1230577 system_pods.go:61] "storage-provisioner" [96d93b22-4965-46a2-83c8-d7742fa76b6a] Running
	I0407 13:51:27.151902 1230577 system_pods.go:74] duration metric: took 178.422752ms to wait for pod list to return data ...
	I0407 13:51:27.151928 1230577 default_sa.go:34] waiting for default service account to be created ...
	I0407 13:51:27.349651 1230577 default_sa.go:45] found service account: "default"
	I0407 13:51:27.349688 1230577 default_sa.go:55] duration metric: took 197.749966ms for default service account to be created ...
	I0407 13:51:27.349737 1230577 system_pods.go:116] waiting for k8s-apps to be running ...
	I0407 13:51:27.559111 1230577 system_pods.go:86] 7 kube-system pods found
	I0407 13:51:27.559145 1230577 system_pods.go:89] "coredns-668d6bf9bc-nld4f" [3887bc68-10af-41c6-bf18-2deca678221c] Running
	I0407 13:51:27.559151 1230577 system_pods.go:89] "etcd-bridge-056871" [6fbbd69f-41ab-4e93-adfe-653b3df252db] Running
	I0407 13:51:27.559155 1230577 system_pods.go:89] "kube-apiserver-bridge-056871" [ca1ffe69-93d4-4bd9-a8ce-459be6f7f9c5] Running
	I0407 13:51:27.559159 1230577 system_pods.go:89] "kube-controller-manager-bridge-056871" [c3b30836-3a6c-4248-a79b-28e7586e6353] Running
	I0407 13:51:27.559164 1230577 system_pods.go:89] "kube-proxy-2ftsv" [8e02d336-d190-4428-8bdd-88bf28e0b4bc] Running
	I0407 13:51:27.559167 1230577 system_pods.go:89] "kube-scheduler-bridge-056871" [b502f2c1-5cdc-49e0-b66d-ae0a1363f03e] Running
	I0407 13:51:27.559170 1230577 system_pods.go:89] "storage-provisioner" [96d93b22-4965-46a2-83c8-d7742fa76b6a] Running
	I0407 13:51:27.559177 1230577 system_pods.go:126] duration metric: took 209.432386ms to wait for k8s-apps to be running ...
	I0407 13:51:27.559185 1230577 system_svc.go:44] waiting for kubelet service to be running ....
	I0407 13:51:27.559240 1230577 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0407 13:51:27.575563 1230577 system_svc.go:56] duration metric: took 16.359151ms WaitForService to wait for kubelet
	I0407 13:51:27.575606 1230577 kubeadm.go:582] duration metric: took 13.070278894s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0407 13:51:27.575627 1230577 node_conditions.go:102] verifying NodePressure condition ...
	I0407 13:51:27.749256 1230577 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0407 13:51:27.749291 1230577 node_conditions.go:123] node cpu capacity is 2
	I0407 13:51:27.749305 1230577 node_conditions.go:105] duration metric: took 173.672077ms to run NodePressure ...
	I0407 13:51:27.749318 1230577 start.go:241] waiting for startup goroutines ...
	I0407 13:51:27.749326 1230577 start.go:246] waiting for cluster config update ...
	I0407 13:51:27.749341 1230577 start.go:255] writing updated cluster config ...
	I0407 13:51:27.749652 1230577 ssh_runner.go:195] Run: rm -f paused
	I0407 13:51:27.803571 1230577 start.go:600] kubectl: 1.32.3, cluster: 1.32.2 (minor skew: 0)
	I0407 13:51:27.807117 1230577 out.go:177] * Done! kubectl is now configured to use "bridge-056871" cluster and "default" namespace by default
	
	
	==> CRI-O <==
	Apr 07 13:52:35 old-k8s-version-435730 crio[627]: time="2025-04-07 13:52:35.907841522Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1744033955907816576,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:112689,},InodesUsed:&UInt64Value{Value:57,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=42581aef-a848-46b1-a04d-3a5e8bde895c name=/runtime.v1.ImageService/ImageFsInfo
	Apr 07 13:52:35 old-k8s-version-435730 crio[627]: time="2025-04-07 13:52:35.908381457Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=5b4b32c5-0138-4cf9-8e47-8ff22e99be78 name=/runtime.v1.RuntimeService/ListContainers
	Apr 07 13:52:35 old-k8s-version-435730 crio[627]: time="2025-04-07 13:52:35.908450378Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=5b4b32c5-0138-4cf9-8e47-8ff22e99be78 name=/runtime.v1.RuntimeService/ListContainers
	Apr 07 13:52:35 old-k8s-version-435730 crio[627]: time="2025-04-07 13:52:35.908486022Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{},}" file="otel-collector/interceptors.go:74" id=5b4b32c5-0138-4cf9-8e47-8ff22e99be78 name=/runtime.v1.RuntimeService/ListContainers
	Apr 07 13:52:35 old-k8s-version-435730 crio[627]: time="2025-04-07 13:52:35.941329294Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=1769abdb-6434-4fc7-8ccc-3fc957d91efc name=/runtime.v1.RuntimeService/Version
	Apr 07 13:52:35 old-k8s-version-435730 crio[627]: time="2025-04-07 13:52:35.941400708Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=1769abdb-6434-4fc7-8ccc-3fc957d91efc name=/runtime.v1.RuntimeService/Version
	Apr 07 13:52:35 old-k8s-version-435730 crio[627]: time="2025-04-07 13:52:35.944166200Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=28d5745a-3481-436c-b250-d1c6dbd7cd93 name=/runtime.v1.ImageService/ImageFsInfo
	Apr 07 13:52:35 old-k8s-version-435730 crio[627]: time="2025-04-07 13:52:35.944544909Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1744033955944521034,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:112689,},InodesUsed:&UInt64Value{Value:57,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=28d5745a-3481-436c-b250-d1c6dbd7cd93 name=/runtime.v1.ImageService/ImageFsInfo
	Apr 07 13:52:35 old-k8s-version-435730 crio[627]: time="2025-04-07 13:52:35.945422421Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=ba193a1d-8495-4ce0-b737-be5171cbe11a name=/runtime.v1.RuntimeService/ListContainers
	Apr 07 13:52:35 old-k8s-version-435730 crio[627]: time="2025-04-07 13:52:35.945480945Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=ba193a1d-8495-4ce0-b737-be5171cbe11a name=/runtime.v1.RuntimeService/ListContainers
	Apr 07 13:52:35 old-k8s-version-435730 crio[627]: time="2025-04-07 13:52:35.945516723Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{},}" file="otel-collector/interceptors.go:74" id=ba193a1d-8495-4ce0-b737-be5171cbe11a name=/runtime.v1.RuntimeService/ListContainers
	Apr 07 13:52:35 old-k8s-version-435730 crio[627]: time="2025-04-07 13:52:35.982493864Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=fca05af0-04bf-4461-8464-71ad5fe87b9d name=/runtime.v1.RuntimeService/Version
	Apr 07 13:52:35 old-k8s-version-435730 crio[627]: time="2025-04-07 13:52:35.982568328Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=fca05af0-04bf-4461-8464-71ad5fe87b9d name=/runtime.v1.RuntimeService/Version
	Apr 07 13:52:35 old-k8s-version-435730 crio[627]: time="2025-04-07 13:52:35.983871281Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=b8c8855a-d620-4887-9262-29362bb699a1 name=/runtime.v1.ImageService/ImageFsInfo
	Apr 07 13:52:35 old-k8s-version-435730 crio[627]: time="2025-04-07 13:52:35.984244173Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1744033955984220482,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:112689,},InodesUsed:&UInt64Value{Value:57,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=b8c8855a-d620-4887-9262-29362bb699a1 name=/runtime.v1.ImageService/ImageFsInfo
	Apr 07 13:52:35 old-k8s-version-435730 crio[627]: time="2025-04-07 13:52:35.984831742Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=b00494e3-fb2b-4469-9694-c8c2f8608acf name=/runtime.v1.RuntimeService/ListContainers
	Apr 07 13:52:35 old-k8s-version-435730 crio[627]: time="2025-04-07 13:52:35.984882839Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=b00494e3-fb2b-4469-9694-c8c2f8608acf name=/runtime.v1.RuntimeService/ListContainers
	Apr 07 13:52:35 old-k8s-version-435730 crio[627]: time="2025-04-07 13:52:35.984915880Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{},}" file="otel-collector/interceptors.go:74" id=b00494e3-fb2b-4469-9694-c8c2f8608acf name=/runtime.v1.RuntimeService/ListContainers
	Apr 07 13:52:36 old-k8s-version-435730 crio[627]: time="2025-04-07 13:52:36.019458989Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=5ce3976b-14e6-432d-9544-d116d53c51eb name=/runtime.v1.RuntimeService/Version
	Apr 07 13:52:36 old-k8s-version-435730 crio[627]: time="2025-04-07 13:52:36.019542548Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=5ce3976b-14e6-432d-9544-d116d53c51eb name=/runtime.v1.RuntimeService/Version
	Apr 07 13:52:36 old-k8s-version-435730 crio[627]: time="2025-04-07 13:52:36.020948199Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=b9a564a4-9841-4b3d-a34a-2bb2c10f0b48 name=/runtime.v1.ImageService/ImageFsInfo
	Apr 07 13:52:36 old-k8s-version-435730 crio[627]: time="2025-04-07 13:52:36.021344257Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1744033956021317237,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:112689,},InodesUsed:&UInt64Value{Value:57,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=b9a564a4-9841-4b3d-a34a-2bb2c10f0b48 name=/runtime.v1.ImageService/ImageFsInfo
	Apr 07 13:52:36 old-k8s-version-435730 crio[627]: time="2025-04-07 13:52:36.022236140Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=29f7253e-8060-4da8-9945-7a363e2a967e name=/runtime.v1.RuntimeService/ListContainers
	Apr 07 13:52:36 old-k8s-version-435730 crio[627]: time="2025-04-07 13:52:36.022293262Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=29f7253e-8060-4da8-9945-7a363e2a967e name=/runtime.v1.RuntimeService/ListContainers
	Apr 07 13:52:36 old-k8s-version-435730 crio[627]: time="2025-04-07 13:52:36.022332934Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{},}" file="otel-collector/interceptors.go:74" id=29f7253e-8060-4da8-9945-7a363e2a967e name=/runtime.v1.RuntimeService/ListContainers
	
	
	==> container status <==
	CONTAINER           IMAGE               CREATED             STATE               NAME                ATTEMPT             POD ID              POD
	
	
	==> describe nodes <==
	command /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" failed with error: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	
	==> dmesg <==
	[Apr 7 13:35] You have booted with nomodeset. This means your GPU drivers are DISABLED
	[  +0.000001] Any video related functionality will be severely degraded, and you may not even be able to suspend the system properly
	[  +0.000000] Unless you actually understand what nomodeset does, you should reboot without enabling it
	[  +0.054242] Spectre V2 : WARNING: Unprivileged eBPF is enabled with eIBRS on, data leaks possible via Spectre v2 BHB attacks!
	[  +0.042193] acpi PNP0A03:00: fail to add MMCONFIG information, can't access extended PCI configuration space under this bridge.
	[  +5.063295] platform regulatory.0: Direct firmware load for regulatory.db failed with error -2
	[  +2.360663] systemd-fstab-generator[116]: Ignoring "noauto" option for root device
	[  +1.644180] NFSD: Using /var/lib/nfs/v4recovery as the NFSv4 state recovery directory
	[  +0.000009] NFSD: unable to find recovery directory /var/lib/nfs/v4recovery
	[  +0.000001] NFSD: Unable to initialize client recovery tracking! (-2)
	[  +7.768056] systemd-fstab-generator[555]: Ignoring "noauto" option for root device
	[  +0.063980] kauditd_printk_skb: 1 callbacks suppressed
	[  +0.066857] systemd-fstab-generator[567]: Ignoring "noauto" option for root device
	[  +0.213937] systemd-fstab-generator[581]: Ignoring "noauto" option for root device
	[  +0.125997] systemd-fstab-generator[593]: Ignoring "noauto" option for root device
	[  +0.267658] systemd-fstab-generator[618]: Ignoring "noauto" option for root device
	[  +7.911500] systemd-fstab-generator[880]: Ignoring "noauto" option for root device
	[  +0.062381] kauditd_printk_skb: 130 callbacks suppressed
	[  +2.305323] systemd-fstab-generator[1004]: Ignoring "noauto" option for root device
	[ +10.593685] kauditd_printk_skb: 46 callbacks suppressed
	[Apr 7 13:39] systemd-fstab-generator[4897]: Ignoring "noauto" option for root device
	[Apr 7 13:41] systemd-fstab-generator[5176]: Ignoring "noauto" option for root device
	[  +0.065808] kauditd_printk_skb: 12 callbacks suppressed
	
	
	==> kernel <==
	 13:52:36 up 17 min,  0 users,  load average: 0.00, 0.06, 0.07
	Linux old-k8s-version-435730 5.10.207 #1 SMP Tue Jan 14 08:15:54 UTC 2025 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2023.02.9"
	
	
	==> kubelet <==
	Apr 07 13:52:34 old-k8s-version-435730 kubelet[6338]:         /usr/local/go/src/net/ipsock.go:280 +0x4d4
	Apr 07 13:52:34 old-k8s-version-435730 kubelet[6338]: net.(*Resolver).resolveAddrList(0x70c5740, 0x4f7fe40, 0xc000bfba40, 0x48abf6d, 0x4, 0x48ab5d6, 0x3, 0xc000afdce0, 0x24, 0x0, ...)
	Apr 07 13:52:34 old-k8s-version-435730 kubelet[6338]:         /usr/local/go/src/net/dial.go:221 +0x47d
	Apr 07 13:52:34 old-k8s-version-435730 kubelet[6338]: net.(*Dialer).DialContext(0xc000189f80, 0x4f7fe00, 0xc000120018, 0x48ab5d6, 0x3, 0xc000afdce0, 0x24, 0x0, 0x0, 0x0, ...)
	Apr 07 13:52:34 old-k8s-version-435730 kubelet[6338]:         /usr/local/go/src/net/dial.go:403 +0x22b
	Apr 07 13:52:34 old-k8s-version-435730 kubelet[6338]: k8s.io/kubernetes/vendor/k8s.io/client-go/util/connrotation.(*Dialer).DialContext(0xc000a2ae20, 0x4f7fe00, 0xc000120018, 0x48ab5d6, 0x3, 0xc000afdce0, 0x24, 0x60, 0x7f683054a8d8, 0x118, ...)
	Apr 07 13:52:34 old-k8s-version-435730 kubelet[6338]:         /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/client-go/util/connrotation/connrotation.go:73 +0x7e
	Apr 07 13:52:34 old-k8s-version-435730 kubelet[6338]: net/http.(*Transport).dial(0xc000776dc0, 0x4f7fe00, 0xc000120018, 0x48ab5d6, 0x3, 0xc000afdce0, 0x24, 0x0, 0x0, 0x0, ...)
	Apr 07 13:52:34 old-k8s-version-435730 kubelet[6338]:         /usr/local/go/src/net/http/transport.go:1141 +0x1fd
	Apr 07 13:52:34 old-k8s-version-435730 kubelet[6338]: net/http.(*Transport).dialConn(0xc000776dc0, 0x4f7fe00, 0xc000120018, 0x0, 0xc0007923c0, 0x5, 0xc000afdce0, 0x24, 0x0, 0xc000c106c0, ...)
	Apr 07 13:52:34 old-k8s-version-435730 kubelet[6338]:         /usr/local/go/src/net/http/transport.go:1575 +0x1abb
	Apr 07 13:52:34 old-k8s-version-435730 kubelet[6338]: net/http.(*Transport).dialConnFor(0xc000776dc0, 0xc000be22c0)
	Apr 07 13:52:34 old-k8s-version-435730 kubelet[6338]:         /usr/local/go/src/net/http/transport.go:1421 +0xc6
	Apr 07 13:52:34 old-k8s-version-435730 kubelet[6338]: created by net/http.(*Transport).queueForDial
	Apr 07 13:52:34 old-k8s-version-435730 kubelet[6338]:         /usr/local/go/src/net/http/transport.go:1390 +0x40f
	Apr 07 13:52:34 old-k8s-version-435730 systemd[1]: kubelet.service: Main process exited, code=exited, status=255/EXCEPTION
	Apr 07 13:52:34 old-k8s-version-435730 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	Apr 07 13:52:34 old-k8s-version-435730 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 113.
	Apr 07 13:52:34 old-k8s-version-435730 systemd[1]: Stopped kubelet: The Kubernetes Node Agent.
	Apr 07 13:52:34 old-k8s-version-435730 systemd[1]: Started kubelet: The Kubernetes Node Agent.
	Apr 07 13:52:34 old-k8s-version-435730 kubelet[6347]: I0407 13:52:34.783046    6347 server.go:416] Version: v1.20.0
	Apr 07 13:52:34 old-k8s-version-435730 kubelet[6347]: I0407 13:52:34.783450    6347 server.go:837] Client rotation is on, will bootstrap in background
	Apr 07 13:52:34 old-k8s-version-435730 kubelet[6347]: I0407 13:52:34.785579    6347 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-client-current.pem".
	Apr 07 13:52:34 old-k8s-version-435730 kubelet[6347]: W0407 13:52:34.786595    6347 manager.go:159] Cannot detect current cgroup on cgroup v2
	Apr 07 13:52:34 old-k8s-version-435730 kubelet[6347]: I0407 13:52:34.787405    6347 dynamic_cafile_content.go:167] Starting client-ca-bundle::/var/lib/minikube/certs/ca.crt
	

                                                
                                                
-- /stdout --
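The post-mortem output above shows the old-k8s-version-435730 control plane down: kubectl describe nodes is refused on localhost:8443, the CRI-O container list is empty, and kubelet is crash-looping (systemd restart counter at 113). A minimal manual triage sketch, assuming the profile name taken from the logs above and that crictl ships on the node image (an assumption here, not something this report states); these commands are illustrative and are not part of the test suite:

	# Kubelet state and its most recent crash output on the node
	out/minikube-linux-amd64 -p old-k8s-version-435730 ssh "sudo systemctl status kubelet --no-pager"
	out/minikube-linux-amd64 -p old-k8s-version-435730 ssh "sudo journalctl -u kubelet --no-pager | tail -n 100"
	# All containers CRI-O knows about; an empty list matches the '==> container status <==' section above
	out/minikube-linux-amd64 -p old-k8s-version-435730 ssh "sudo crictl ps -a"
	# Probe the apiserver endpoint the describe-nodes call failed against
	out/minikube-linux-amd64 -p old-k8s-version-435730 ssh "curl -sk https://localhost:8443/healthz"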
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p old-k8s-version-435730 -n old-k8s-version-435730
helpers_test.go:254: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p old-k8s-version-435730 -n old-k8s-version-435730: exit status 2 (264.355237ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:254: status error: exit status 2 (may be ok)
helpers_test.go:256: "old-k8s-version-435730" apiserver is not running, skipping kubectl commands (state="Stopped")
--- FAIL: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop (541.77s)
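To iterate on this failure outside the CI job, the Go test runner's -run filter can select just this subtest. A sketch assuming the standard minikube repository layout (integration tests under test/integration) and a pre-built out/minikube-linux-amd64; suite-specific flags such as driver or start-args selection are omitted and may be required:

	# Re-run only the failing subtest; the slash-separated path matches Go subtest names
	go test ./test/integration -v -timeout 90m \
	  -run 'TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop'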

                                                
                                    
x
+
TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop (387.91s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop
start_stop_delete_test.go:285: (dbg) TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
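The wait above polls the kubernetes-dashboard namespace for pods labeled k8s-app=kubernetes-dashboard, and as the repeated WARNING lines below show, every poll is refused at 192.168.39.211:8443. The same query can be issued by hand; a sketch assuming the kubeconfig context carries the profile name, as it does elsewhere in this report:

	# The label-selector query the helper performs on each poll
	kubectl --context old-k8s-version-435730 -n kubernetes-dashboard get pods -l k8s-app=kubernetes-dashboard
	# Direct probe of the endpoint the warnings point at; -k skips certificate verification for a manual check
	curl -k https://192.168.39.211:8443/healthz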
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.211:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.211:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.211:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.211:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.211:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.211:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.211:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.211:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.211:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.211:8443: connect: connection refused
E0407 13:52:41.492286 1169716 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20602-1162386/.minikube/profiles/kindnet-056871/client.crt: no such file or directory" logger="UnhandledError"
E0407 13:52:41.498959 1169716 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20602-1162386/.minikube/profiles/kindnet-056871/client.crt: no such file or directory" logger="UnhandledError"
E0407 13:52:41.510550 1169716 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20602-1162386/.minikube/profiles/kindnet-056871/client.crt: no such file or directory" logger="UnhandledError"
E0407 13:52:41.532121 1169716 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20602-1162386/.minikube/profiles/kindnet-056871/client.crt: no such file or directory" logger="UnhandledError"
E0407 13:52:41.573782 1169716 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20602-1162386/.minikube/profiles/kindnet-056871/client.crt: no such file or directory" logger="UnhandledError"
E0407 13:52:41.655391 1169716 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20602-1162386/.minikube/profiles/kindnet-056871/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.211:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.211:8443: connect: connection refused
E0407 13:52:41.817601 1169716 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20602-1162386/.minikube/profiles/kindnet-056871/client.crt: no such file or directory" logger="UnhandledError"
E0407 13:52:42.139514 1169716 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20602-1162386/.minikube/profiles/kindnet-056871/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.211:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.211:8443: connect: connection refused
E0407 13:52:42.781920 1169716 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20602-1162386/.minikube/profiles/kindnet-056871/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.211:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.211:8443: connect: connection refused
E0407 13:52:44.064328 1169716 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20602-1162386/.minikube/profiles/kindnet-056871/client.crt: no such file or directory" logger="UnhandledError"
E0407 13:52:44.487324 1169716 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20602-1162386/.minikube/profiles/auto-056871/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.211:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.211:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.211:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.211:8443: connect: connection refused
E0407 13:52:46.626132 1169716 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20602-1162386/.minikube/profiles/kindnet-056871/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.211:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.211:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.211:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.211:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.211:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.211:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.211:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.211:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.211:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.211:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.211:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.211:8443: connect: connection refused
E0407 13:52:51.748080 1169716 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20602-1162386/.minikube/profiles/kindnet-056871/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.211:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.211:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.211:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.211:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.211:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.211:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.211:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.211:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.211:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.211:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.211:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.211:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.211:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.211:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.211:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.211:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.211:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.211:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.211:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.211:8443: connect: connection refused
E0407 13:53:01.990361 1169716 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20602-1162386/.minikube/profiles/kindnet-056871/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.211:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.211:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.211:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.211:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.211:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.211:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.211:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.211:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.211:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.211:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.211:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.211:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.211:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.211:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.211:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.211:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.211:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.211:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.211:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.211:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.211:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.211:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.211:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.211:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.211:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.211:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.211:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.211:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.211:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.211:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.211:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.211:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.211:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.211:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.211:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.211:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.211:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.211:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.211:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.211:8443: connect: connection refused
E0407 13:53:21.692268 1169716 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20602-1162386/.minikube/profiles/calico-056871/client.crt: no such file or directory" logger="UnhandledError"
E0407 13:53:21.698930 1169716 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20602-1162386/.minikube/profiles/calico-056871/client.crt: no such file or directory" logger="UnhandledError"
E0407 13:53:21.710529 1169716 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20602-1162386/.minikube/profiles/calico-056871/client.crt: no such file or directory" logger="UnhandledError"
E0407 13:53:21.732082 1169716 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20602-1162386/.minikube/profiles/calico-056871/client.crt: no such file or directory" logger="UnhandledError"
E0407 13:53:21.773685 1169716 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20602-1162386/.minikube/profiles/calico-056871/client.crt: no such file or directory" logger="UnhandledError"
E0407 13:53:21.855280 1169716 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20602-1162386/.minikube/profiles/calico-056871/client.crt: no such file or directory" logger="UnhandledError"
E0407 13:53:22.017002 1169716 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20602-1162386/.minikube/profiles/calico-056871/client.crt: no such file or directory" logger="UnhandledError"
E0407 13:53:22.338981 1169716 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20602-1162386/.minikube/profiles/calico-056871/client.crt: no such file or directory" logger="UnhandledError"
E0407 13:53:22.471854 1169716 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20602-1162386/.minikube/profiles/kindnet-056871/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.211:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.211:8443: connect: connection refused
E0407 13:53:22.980420 1169716 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20602-1162386/.minikube/profiles/calico-056871/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.211:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.211:8443: connect: connection refused
E0407 13:53:24.262431 1169716 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20602-1162386/.minikube/profiles/calico-056871/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.211:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.211:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.211:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.211:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.211:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.211:8443: connect: connection refused
E0407 13:53:26.824598 1169716 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20602-1162386/.minikube/profiles/calico-056871/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.211:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.211:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.211:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.211:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.211:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.211:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.211:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.211:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.211:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.211:8443: connect: connection refused
E0407 13:53:31.946618 1169716 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20602-1162386/.minikube/profiles/calico-056871/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.211:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.211:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.211:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.211:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.211:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.211:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.211:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.211:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.211:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.211:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.211:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.211:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.211:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.211:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.211:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.211:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.211:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.211:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.211:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.211:8443: connect: connection refused
E0407 13:53:42.188179 1169716 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20602-1162386/.minikube/profiles/calico-056871/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.211:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.211:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.211:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.211:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.211:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.211:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.211:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.211:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.211:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.211:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.211:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.211:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.211:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.211:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.211:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.211:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.211:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.211:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.211:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.211:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.211:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.211:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.211:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.211:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.211:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.211:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.211:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.211:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.211:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.211:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.211:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.211:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.211:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.211:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.211:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.211:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.211:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.211:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.211:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.211:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.211:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.211:8443: connect: connection refused
E0407 13:54:02.669561 1169716 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20602-1162386/.minikube/profiles/calico-056871/client.crt: no such file or directory" logger="UnhandledError"
E0407 13:54:03.434155 1169716 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20602-1162386/.minikube/profiles/kindnet-056871/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.211:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.211:8443: connect: connection refused
E0407 13:54:06.408732 1169716 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20602-1162386/.minikube/profiles/auto-056871/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.211:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.211:8443: connect: connection refused
E0407 13:54:14.719591 1169716 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20602-1162386/.minikube/profiles/default-k8s-diff-port-405061/client.crt: no such file or directory" logger="UnhandledError"
E0407 13:54:14.726077 1169716 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20602-1162386/.minikube/profiles/default-k8s-diff-port-405061/client.crt: no such file or directory" logger="UnhandledError"
E0407 13:54:14.737658 1169716 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20602-1162386/.minikube/profiles/default-k8s-diff-port-405061/client.crt: no such file or directory" logger="UnhandledError"
E0407 13:54:14.759277 1169716 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20602-1162386/.minikube/profiles/default-k8s-diff-port-405061/client.crt: no such file or directory" logger="UnhandledError"
E0407 13:54:14.800900 1169716 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20602-1162386/.minikube/profiles/default-k8s-diff-port-405061/client.crt: no such file or directory" logger="UnhandledError"
E0407 13:54:14.882680 1169716 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20602-1162386/.minikube/profiles/default-k8s-diff-port-405061/client.crt: no such file or directory" logger="UnhandledError"
E0407 13:54:15.044964 1169716 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20602-1162386/.minikube/profiles/default-k8s-diff-port-405061/client.crt: no such file or directory" logger="UnhandledError"
E0407 13:54:15.366810 1169716 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20602-1162386/.minikube/profiles/default-k8s-diff-port-405061/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.211:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.211:8443: connect: connection refused
E0407 13:54:16.009088 1169716 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20602-1162386/.minikube/profiles/default-k8s-diff-port-405061/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.211:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.211:8443: connect: connection refused
E0407 13:54:17.290703 1169716 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20602-1162386/.minikube/profiles/default-k8s-diff-port-405061/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.211:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.211:8443: connect: connection refused
E0407 13:54:19.852825 1169716 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20602-1162386/.minikube/profiles/default-k8s-diff-port-405061/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.211:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.211:8443: connect: connection refused
E0407 13:54:24.974509 1169716 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20602-1162386/.minikube/profiles/default-k8s-diff-port-405061/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.211:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.211:8443: connect: connection refused
E0407 13:54:32.943863 1169716 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20602-1162386/.minikube/profiles/custom-flannel-056871/client.crt: no such file or directory" logger="UnhandledError"
E0407 13:54:32.950351 1169716 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20602-1162386/.minikube/profiles/custom-flannel-056871/client.crt: no such file or directory" logger="UnhandledError"
E0407 13:54:32.961861 1169716 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20602-1162386/.minikube/profiles/custom-flannel-056871/client.crt: no such file or directory" logger="UnhandledError"
E0407 13:54:32.983441 1169716 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20602-1162386/.minikube/profiles/custom-flannel-056871/client.crt: no such file or directory" logger="UnhandledError"
E0407 13:54:33.025015 1169716 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20602-1162386/.minikube/profiles/custom-flannel-056871/client.crt: no such file or directory" logger="UnhandledError"
E0407 13:54:33.106627 1169716 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20602-1162386/.minikube/profiles/custom-flannel-056871/client.crt: no such file or directory" logger="UnhandledError"
E0407 13:54:33.268355 1169716 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20602-1162386/.minikube/profiles/custom-flannel-056871/client.crt: no such file or directory" logger="UnhandledError"
E0407 13:54:33.590234 1169716 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20602-1162386/.minikube/profiles/custom-flannel-056871/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.211:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.211:8443: connect: connection refused
E0407 13:54:34.232613 1169716 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20602-1162386/.minikube/profiles/custom-flannel-056871/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.211:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.211:8443: connect: connection refused
E0407 13:54:35.216712 1169716 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20602-1162386/.minikube/profiles/default-k8s-diff-port-405061/client.crt: no such file or directory" logger="UnhandledError"
E0407 13:54:35.514696 1169716 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20602-1162386/.minikube/profiles/custom-flannel-056871/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.211:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.211:8443: connect: connection refused
E0407 13:54:38.076134 1169716 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20602-1162386/.minikube/profiles/custom-flannel-056871/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.211:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.211:8443: connect: connection refused
E0407 13:54:40.604600 1169716 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20602-1162386/.minikube/profiles/no-preload-028452/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.211:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.211:8443: connect: connection refused
E0407 13:54:43.198530 1169716 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20602-1162386/.minikube/profiles/custom-flannel-056871/client.crt: no such file or directory" logger="UnhandledError"
E0407 13:54:43.631563 1169716 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20602-1162386/.minikube/profiles/calico-056871/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.211:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.211:8443: connect: connection refused
E0407 13:54:53.440283 1169716 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20602-1162386/.minikube/profiles/custom-flannel-056871/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.211:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.211:8443: connect: connection refused
E0407 13:54:55.698601 1169716 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20602-1162386/.minikube/profiles/default-k8s-diff-port-405061/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.211:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.211:8443: connect: connection refused
E0407 13:54:57.744138 1169716 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20602-1162386/.minikube/profiles/enable-default-cni-056871/client.crt: no such file or directory" logger="UnhandledError"
E0407 13:54:57.750629 1169716 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20602-1162386/.minikube/profiles/enable-default-cni-056871/client.crt: no such file or directory" logger="UnhandledError"
E0407 13:54:57.762082 1169716 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20602-1162386/.minikube/profiles/enable-default-cni-056871/client.crt: no such file or directory" logger="UnhandledError"
E0407 13:54:57.783619 1169716 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20602-1162386/.minikube/profiles/enable-default-cni-056871/client.crt: no such file or directory" logger="UnhandledError"
E0407 13:54:57.825140 1169716 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20602-1162386/.minikube/profiles/enable-default-cni-056871/client.crt: no such file or directory" logger="UnhandledError"
E0407 13:54:57.906732 1169716 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20602-1162386/.minikube/profiles/enable-default-cni-056871/client.crt: no such file or directory" logger="UnhandledError"
E0407 13:54:58.068469 1169716 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20602-1162386/.minikube/profiles/enable-default-cni-056871/client.crt: no such file or directory" logger="UnhandledError"
E0407 13:54:58.390335 1169716 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20602-1162386/.minikube/profiles/enable-default-cni-056871/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.211:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.211:8443: connect: connection refused
E0407 13:54:59.032680 1169716 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20602-1162386/.minikube/profiles/enable-default-cni-056871/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.211:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.211:8443: connect: connection refused
E0407 13:55:00.314542 1169716 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20602-1162386/.minikube/profiles/enable-default-cni-056871/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.211:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.211:8443: connect: connection refused
E0407 13:55:02.876401 1169716 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20602-1162386/.minikube/profiles/enable-default-cni-056871/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.211:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.211:8443: connect: connection refused
E0407 13:55:07.998626 1169716 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20602-1162386/.minikube/profiles/enable-default-cni-056871/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.211:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.211:8443: connect: connection refused
E0407 13:55:13.921776 1169716 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20602-1162386/.minikube/profiles/custom-flannel-056871/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.211:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.211:8443: connect: connection refused
E0407 13:55:18.240256 1169716 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20602-1162386/.minikube/profiles/enable-default-cni-056871/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.211:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.211:8443: connect: connection refused
E0407 13:55:25.356146 1169716 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20602-1162386/.minikube/profiles/kindnet-056871/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.211:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.211:8443: connect: connection refused
E0407 13:55:36.660225 1169716 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20602-1162386/.minikube/profiles/default-k8s-diff-port-405061/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.211:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.211:8443: connect: connection refused
E0407 13:55:38.722110 1169716 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20602-1162386/.minikube/profiles/enable-default-cni-056871/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.211:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.211:8443: connect: connection refused
E0407 13:55:54.883776 1169716 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20602-1162386/.minikube/profiles/custom-flannel-056871/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.211:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.211:8443: connect: connection refused
E0407 13:56:05.553107 1169716 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20602-1162386/.minikube/profiles/calico-056871/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.211:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.211:8443: connect: connection refused
E0407 13:56:09.343451 1169716 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20602-1162386/.minikube/profiles/functional-728898/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.211:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.211:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.211:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.211:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.211:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.211:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.211:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.211:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.211:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.211:8443: connect: connection refused
E0407 13:56:13.910524 1169716 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20602-1162386/.minikube/profiles/flannel-056871/client.crt: no such file or directory" logger="UnhandledError"
E0407 13:56:13.917028 1169716 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20602-1162386/.minikube/profiles/flannel-056871/client.crt: no such file or directory" logger="UnhandledError"
E0407 13:56:13.928578 1169716 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20602-1162386/.minikube/profiles/flannel-056871/client.crt: no such file or directory" logger="UnhandledError"
E0407 13:56:13.950123 1169716 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20602-1162386/.minikube/profiles/flannel-056871/client.crt: no such file or directory" logger="UnhandledError"
E0407 13:56:13.991713 1169716 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20602-1162386/.minikube/profiles/flannel-056871/client.crt: no such file or directory" logger="UnhandledError"
E0407 13:56:14.073380 1169716 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20602-1162386/.minikube/profiles/flannel-056871/client.crt: no such file or directory" logger="UnhandledError"
E0407 13:56:14.235109 1169716 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20602-1162386/.minikube/profiles/flannel-056871/client.crt: no such file or directory" logger="UnhandledError"
E0407 13:56:14.556951 1169716 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20602-1162386/.minikube/profiles/flannel-056871/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.211:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.211:8443: connect: connection refused
E0407 13:56:15.198622 1169716 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20602-1162386/.minikube/profiles/flannel-056871/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.211:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.211:8443: connect: connection refused
E0407 13:56:16.480344 1169716 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20602-1162386/.minikube/profiles/flannel-056871/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.211:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.211:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.211:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.211:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.211:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.211:8443: connect: connection refused
E0407 13:56:19.042431 1169716 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20602-1162386/.minikube/profiles/flannel-056871/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.211:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.211:8443: connect: connection refused
E0407 13:56:19.683826 1169716 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20602-1162386/.minikube/profiles/enable-default-cni-056871/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.211:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.211:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.211:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.211:8443: connect: connection refused
E0407 13:56:22.546985 1169716 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20602-1162386/.minikube/profiles/auto-056871/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.211:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.211:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.211:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.211:8443: connect: connection refused
E0407 13:56:24.164288 1169716 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20602-1162386/.minikube/profiles/flannel-056871/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.211:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.211:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.211:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.211:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.211:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.211:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.211:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.211:8443: connect: connection refused
E0407 13:56:28.425564 1169716 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20602-1162386/.minikube/profiles/bridge-056871/client.crt: no such file or directory" logger="UnhandledError"
E0407 13:56:28.432062 1169716 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20602-1162386/.minikube/profiles/bridge-056871/client.crt: no such file or directory" logger="UnhandledError"
E0407 13:56:28.443476 1169716 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20602-1162386/.minikube/profiles/bridge-056871/client.crt: no such file or directory" logger="UnhandledError"
E0407 13:56:28.465048 1169716 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20602-1162386/.minikube/profiles/bridge-056871/client.crt: no such file or directory" logger="UnhandledError"
E0407 13:56:28.506641 1169716 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20602-1162386/.minikube/profiles/bridge-056871/client.crt: no such file or directory" logger="UnhandledError"
E0407 13:56:28.588228 1169716 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20602-1162386/.minikube/profiles/bridge-056871/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.211:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.211:8443: connect: connection refused
E0407 13:56:28.749764 1169716 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20602-1162386/.minikube/profiles/bridge-056871/client.crt: no such file or directory" logger="UnhandledError"
E0407 13:56:29.071598 1169716 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20602-1162386/.minikube/profiles/bridge-056871/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.211:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.211:8443: connect: connection refused
E0407 13:56:29.713812 1169716 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20602-1162386/.minikube/profiles/bridge-056871/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.211:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.211:8443: connect: connection refused
E0407 13:56:30.995566 1169716 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20602-1162386/.minikube/profiles/bridge-056871/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.211:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.211:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.211:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.211:8443: connect: connection refused
E0407 13:56:33.556912 1169716 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20602-1162386/.minikube/profiles/bridge-056871/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.211:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.211:8443: connect: connection refused
E0407 13:56:34.406377 1169716 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20602-1162386/.minikube/profiles/flannel-056871/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.211:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.211:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.211:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.211:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.211:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.211:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.211:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.211:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.211:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.211:8443: connect: connection refused
E0407 13:56:38.679132 1169716 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20602-1162386/.minikube/profiles/bridge-056871/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.211:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.211:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.211:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.211:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.211:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.211:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.211:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.211:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.211:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.211:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.211:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.211:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.211:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.211:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.211:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.211:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.211:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.211:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.211:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.211:8443: connect: connection refused
E0407 13:56:48.921059 1169716 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20602-1162386/.minikube/profiles/bridge-056871/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.211:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.211:8443: connect: connection refused
E0407 13:56:50.250983 1169716 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20602-1162386/.minikube/profiles/auto-056871/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.211:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.211:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.211:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.211:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.211:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.211:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.211:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.211:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.211:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.211:8443: connect: connection refused
E0407 13:56:54.887933 1169716 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20602-1162386/.minikube/profiles/flannel-056871/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.211:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.211:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.211:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.211:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.211:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.211:8443: connect: connection refused
E0407 13:56:58.582070 1169716 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20602-1162386/.minikube/profiles/default-k8s-diff-port-405061/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.211:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.211:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.211:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.211:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.211:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.211:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.211:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.211:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.211:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.211:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.211:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.211:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.211:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.211:8443: connect: connection refused
E0407 13:57:04.915652 1169716 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20602-1162386/.minikube/profiles/addons-660533/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.211:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.211:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.211:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.211:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.211:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.211:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.211:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.211:8443: connect: connection refused
E0407 13:57:09.403259 1169716 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20602-1162386/.minikube/profiles/bridge-056871/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.211:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.211:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.211:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.211:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.211:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.211:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.211:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.211:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.211:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.211:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.211:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.211:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.211:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.211:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.211:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.211:8443: connect: connection refused
E0407 13:57:16.805205 1169716 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20602-1162386/.minikube/profiles/custom-flannel-056871/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.211:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.211:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.211:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.211:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.211:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.211:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.211:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.211:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.211:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.211:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.211:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.211:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.211:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.211:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.211:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.211:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.211:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.211:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.211:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.211:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.211:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.211:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.211:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.211:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.211:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.211:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.211:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.211:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.211:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.211:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.211:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.211:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.211:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.211:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.211:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.211:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.211:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.211:8443: connect: connection refused
E0407 13:57:35.850121 1169716 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20602-1162386/.minikube/profiles/flannel-056871/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.211:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.211:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.211:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.211:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.211:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.211:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.211:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.211:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.211:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.211:8443: connect: connection refused
E0407 13:57:41.492539 1169716 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20602-1162386/.minikube/profiles/kindnet-056871/client.crt: no such file or directory" logger="UnhandledError"
E0407 13:57:41.606171 1169716 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20602-1162386/.minikube/profiles/enable-default-cni-056871/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.211:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.211:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.211:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.211:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.211:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.211:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.211:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.211:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.211:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.211:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.211:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.211:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.211:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.211:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.211:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.211:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.211:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.211:8443: connect: connection refused
E0407 13:57:50.365371 1169716 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20602-1162386/.minikube/profiles/bridge-056871/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.211:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.211:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.211:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.211:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.211:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.211:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.211:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.211:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.211:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.211:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.211:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.211:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.211:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.211:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.211:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.211:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.211:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.211:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.211:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.211:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.211:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.211:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.211:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.211:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.211:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.211:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.211:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.211:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.211:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.211:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.211:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.211:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.211:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.211:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.211:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.211:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.211:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.211:8443: connect: connection refused
E0407 13:58:09.198546 1169716 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20602-1162386/.minikube/profiles/kindnet-056871/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.211:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.211:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.211:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.211:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.211:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.211:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.211:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.211:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.211:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.211:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.211:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.211:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.211:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.211:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.211:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.211:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.211:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.211:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.211:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.211:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.211:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.211:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.211:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.211:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.211:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.211:8443: connect: connection refused
E0407 13:58:21.692169 1169716 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20602-1162386/.minikube/profiles/calico-056871/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.211:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.211:8443: connect: connection refused
[previous warning repeated 26 more times]
E0407 13:58:49.395503 1169716 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20602-1162386/.minikube/profiles/calico-056871/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.211:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.211:8443: connect: connection refused
[previous warning repeated 8 more times]
E0407 13:58:57.771568 1169716 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20602-1162386/.minikube/profiles/flannel-056871/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.211:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.211:8443: connect: connection refused
[previous warning repeated 4 more times]
start_stop_delete_test.go:285: ***** TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: pod "k8s-app=kubernetes-dashboard" failed to start within 9m0s: context deadline exceeded ****
start_stop_delete_test.go:285: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p old-k8s-version-435730 -n old-k8s-version-435730
start_stop_delete_test.go:285: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p old-k8s-version-435730 -n old-k8s-version-435730: exit status 2 (249.570031ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
start_stop_delete_test.go:285: status error: exit status 2 (may be ok)
start_stop_delete_test.go:285: "old-k8s-version-435730" apiserver is not running, skipping kubectl commands (state="Stopped")
start_stop_delete_test.go:286: failed waiting for 'addon dashboard' pod post-stop-start: k8s-app=kubernetes-dashboard within 9m0s: context deadline exceeded
start_stop_delete_test.go:289: (dbg) Run:  kubectl --context old-k8s-version-435730 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
start_stop_delete_test.go:289: (dbg) Non-zero exit: kubectl --context old-k8s-version-435730 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard: context deadline exceeded (2.032µs)
start_stop_delete_test.go:291: failed to get info on kubernetes-dashboard deployments. args "kubectl --context old-k8s-version-435730 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard": context deadline exceeded
start_stop_delete_test.go:295: addon did not load correct image. Expected to contain " registry.k8s.io/echoserver:1.4". Addon deployment info: 
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-435730 -n old-k8s-version-435730
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-435730 -n old-k8s-version-435730: exit status 2 (239.92251ms)

                                                
                                                
-- stdout --
	Running

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 2 (may be ok)
helpers_test.go:244: <<< TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p old-k8s-version-435730 logs -n 25
helpers_test.go:252: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop logs: 
-- stdout --
	
	==> Audit <==
	|---------|------------------------------------------------------|---------------|---------|---------|---------------------|---------------------|
	| Command |                         Args                         |    Profile    |  User   | Version |     Start Time      |      End Time       |
	|---------|------------------------------------------------------|---------------|---------|---------|---------------------|---------------------|
	| ssh     | -p bridge-056871 sudo iptables                       | bridge-056871 | jenkins | v1.35.0 | 07 Apr 25 13:51 UTC | 07 Apr 25 13:51 UTC |
	|         | -t nat -L -n -v                                      |               |         |         |                     |                     |
	| ssh     | -p bridge-056871 sudo                                | bridge-056871 | jenkins | v1.35.0 | 07 Apr 25 13:51 UTC | 07 Apr 25 13:51 UTC |
	|         | systemctl status kubelet --all                       |               |         |         |                     |                     |
	|         | --full --no-pager                                    |               |         |         |                     |                     |
	| ssh     | -p bridge-056871 sudo                                | bridge-056871 | jenkins | v1.35.0 | 07 Apr 25 13:51 UTC | 07 Apr 25 13:51 UTC |
	|         | systemctl cat kubelet                                |               |         |         |                     |                     |
	|         | --no-pager                                           |               |         |         |                     |                     |
	| ssh     | -p bridge-056871 sudo                                | bridge-056871 | jenkins | v1.35.0 | 07 Apr 25 13:51 UTC | 07 Apr 25 13:51 UTC |
	|         | journalctl -xeu kubelet --all                        |               |         |         |                     |                     |
	|         | --full --no-pager                                    |               |         |         |                     |                     |
	| ssh     | -p bridge-056871 sudo cat                            | bridge-056871 | jenkins | v1.35.0 | 07 Apr 25 13:51 UTC | 07 Apr 25 13:51 UTC |
	|         | /etc/kubernetes/kubelet.conf                         |               |         |         |                     |                     |
	| ssh     | -p bridge-056871 sudo cat                            | bridge-056871 | jenkins | v1.35.0 | 07 Apr 25 13:51 UTC | 07 Apr 25 13:51 UTC |
	|         | /var/lib/kubelet/config.yaml                         |               |         |         |                     |                     |
	| ssh     | -p bridge-056871 sudo                                | bridge-056871 | jenkins | v1.35.0 | 07 Apr 25 13:51 UTC |                     |
	|         | systemctl status docker --all                        |               |         |         |                     |                     |
	|         | --full --no-pager                                    |               |         |         |                     |                     |
	| ssh     | -p bridge-056871 sudo                                | bridge-056871 | jenkins | v1.35.0 | 07 Apr 25 13:51 UTC | 07 Apr 25 13:51 UTC |
	|         | systemctl cat docker                                 |               |         |         |                     |                     |
	|         | --no-pager                                           |               |         |         |                     |                     |
	| ssh     | -p bridge-056871 sudo cat                            | bridge-056871 | jenkins | v1.35.0 | 07 Apr 25 13:51 UTC | 07 Apr 25 13:51 UTC |
	|         | /etc/docker/daemon.json                              |               |         |         |                     |                     |
	| ssh     | -p bridge-056871 sudo docker                         | bridge-056871 | jenkins | v1.35.0 | 07 Apr 25 13:51 UTC |                     |
	|         | system info                                          |               |         |         |                     |                     |
	| ssh     | -p bridge-056871 sudo                                | bridge-056871 | jenkins | v1.35.0 | 07 Apr 25 13:51 UTC |                     |
	|         | systemctl status cri-docker                          |               |         |         |                     |                     |
	|         | --all --full --no-pager                              |               |         |         |                     |                     |
	| ssh     | -p bridge-056871 sudo                                | bridge-056871 | jenkins | v1.35.0 | 07 Apr 25 13:51 UTC | 07 Apr 25 13:51 UTC |
	|         | systemctl cat cri-docker                             |               |         |         |                     |                     |
	|         | --no-pager                                           |               |         |         |                     |                     |
	| ssh     | -p bridge-056871 sudo cat                            | bridge-056871 | jenkins | v1.35.0 | 07 Apr 25 13:51 UTC |                     |
	|         | /etc/systemd/system/cri-docker.service.d/10-cni.conf |               |         |         |                     |                     |
	| ssh     | -p bridge-056871 sudo cat                            | bridge-056871 | jenkins | v1.35.0 | 07 Apr 25 13:51 UTC | 07 Apr 25 13:51 UTC |
	|         | /usr/lib/systemd/system/cri-docker.service           |               |         |         |                     |                     |
	| ssh     | -p bridge-056871 sudo                                | bridge-056871 | jenkins | v1.35.0 | 07 Apr 25 13:51 UTC | 07 Apr 25 13:51 UTC |
	|         | cri-dockerd --version                                |               |         |         |                     |                     |
	| ssh     | -p bridge-056871 sudo                                | bridge-056871 | jenkins | v1.35.0 | 07 Apr 25 13:51 UTC |                     |
	|         | systemctl status containerd                          |               |         |         |                     |                     |
	|         | --all --full --no-pager                              |               |         |         |                     |                     |
	| ssh     | -p bridge-056871 sudo                                | bridge-056871 | jenkins | v1.35.0 | 07 Apr 25 13:51 UTC | 07 Apr 25 13:51 UTC |
	|         | systemctl cat containerd                             |               |         |         |                     |                     |
	|         | --no-pager                                           |               |         |         |                     |                     |
	| ssh     | -p bridge-056871 sudo cat                            | bridge-056871 | jenkins | v1.35.0 | 07 Apr 25 13:51 UTC | 07 Apr 25 13:51 UTC |
	|         | /lib/systemd/system/containerd.service               |               |         |         |                     |                     |
	| ssh     | -p bridge-056871 sudo cat                            | bridge-056871 | jenkins | v1.35.0 | 07 Apr 25 13:51 UTC | 07 Apr 25 13:51 UTC |
	|         | /etc/containerd/config.toml                          |               |         |         |                     |                     |
	| ssh     | -p bridge-056871 sudo                                | bridge-056871 | jenkins | v1.35.0 | 07 Apr 25 13:51 UTC | 07 Apr 25 13:51 UTC |
	|         | containerd config dump                               |               |         |         |                     |                     |
	| ssh     | -p bridge-056871 sudo                                | bridge-056871 | jenkins | v1.35.0 | 07 Apr 25 13:51 UTC | 07 Apr 25 13:51 UTC |
	|         | systemctl status crio --all                          |               |         |         |                     |                     |
	|         | --full --no-pager                                    |               |         |         |                     |                     |
	| ssh     | -p bridge-056871 sudo                                | bridge-056871 | jenkins | v1.35.0 | 07 Apr 25 13:51 UTC | 07 Apr 25 13:51 UTC |
	|         | systemctl cat crio --no-pager                        |               |         |         |                     |                     |
	| ssh     | -p bridge-056871 sudo find                           | bridge-056871 | jenkins | v1.35.0 | 07 Apr 25 13:51 UTC | 07 Apr 25 13:51 UTC |
	|         | /etc/crio -type f -exec sh -c                        |               |         |         |                     |                     |
	|         | 'echo {}; cat {}' \;                                 |               |         |         |                     |                     |
	| ssh     | -p bridge-056871 sudo crio                           | bridge-056871 | jenkins | v1.35.0 | 07 Apr 25 13:51 UTC | 07 Apr 25 13:51 UTC |
	|         | config                                               |               |         |         |                     |                     |
	| delete  | -p bridge-056871                                     | bridge-056871 | jenkins | v1.35.0 | 07 Apr 25 13:51 UTC | 07 Apr 25 13:51 UTC |
	|---------|------------------------------------------------------|---------------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2025/04/07 13:50:24
	Running on machine: ubuntu-20-agent-2
	Binary: Built with gc go1.24.0 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0407 13:50:24.272012 1230577 out.go:345] Setting OutFile to fd 1 ...
	I0407 13:50:24.272287 1230577 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0407 13:50:24.272296 1230577 out.go:358] Setting ErrFile to fd 2...
	I0407 13:50:24.272301 1230577 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0407 13:50:24.272500 1230577 root.go:338] Updating PATH: /home/jenkins/minikube-integration/20602-1162386/.minikube/bin
	I0407 13:50:24.273223 1230577 out.go:352] Setting JSON to false
	I0407 13:50:24.274746 1230577 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-2","uptime":19968,"bootTime":1744013856,"procs":298,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1078-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0407 13:50:24.274894 1230577 start.go:139] virtualization: kvm guest
	I0407 13:50:24.277205 1230577 out.go:177] * [bridge-056871] minikube v1.35.0 on Ubuntu 20.04 (kvm/amd64)
	I0407 13:50:24.278700 1230577 notify.go:220] Checking for updates...
	I0407 13:50:24.278732 1230577 out.go:177]   - MINIKUBE_LOCATION=20602
	I0407 13:50:24.280213 1230577 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0407 13:50:24.281730 1230577 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/20602-1162386/kubeconfig
	I0407 13:50:24.283144 1230577 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/20602-1162386/.minikube
	I0407 13:50:24.284452 1230577 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0407 13:50:24.286197 1230577 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0407 13:50:24.288646 1230577 config.go:182] Loaded profile config "default-k8s-diff-port-405061": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.32.2
	I0407 13:50:24.288784 1230577 config.go:182] Loaded profile config "flannel-056871": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.32.2
	I0407 13:50:24.288884 1230577 config.go:182] Loaded profile config "old-k8s-version-435730": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.20.0
	I0407 13:50:24.289053 1230577 driver.go:394] Setting default libvirt URI to qemu:///system
	I0407 13:50:24.333039 1230577 out.go:177] * Using the kvm2 driver based on user configuration
	I0407 13:50:24.334437 1230577 start.go:297] selected driver: kvm2
	I0407 13:50:24.334487 1230577 start.go:901] validating driver "kvm2" against <nil>
	I0407 13:50:24.334505 1230577 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0407 13:50:24.336072 1230577 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0407 13:50:24.336312 1230577 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/20602-1162386/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0407 13:50:24.356560 1230577 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.35.0
	I0407 13:50:24.356627 1230577 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0407 13:50:24.356862 1230577 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0407 13:50:24.356904 1230577 cni.go:84] Creating CNI manager for "bridge"
	I0407 13:50:24.356910 1230577 start_flags.go:319] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I0407 13:50:24.356958 1230577 start.go:340] cluster config:
	{Name:bridge-056871 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.46-1743675393-20591@sha256:8de8167e280414c9d16e4c3da59bc85bc7c9cc24228af995863fc7bcabfcf727 Memory:3072 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.32.2 ClusterName:bridge-056871 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:bridge} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.32.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0407 13:50:24.357078 1230577 iso.go:125] acquiring lock: {Name:mk51e1827709f7a3810dbd898083f8185ece65eb Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0407 13:50:24.359372 1230577 out.go:177] * Starting "bridge-056871" primary control-plane node in "bridge-056871" cluster
	I0407 13:50:24.906910 1229086 main.go:141] libmachine: (flannel-056871) DBG | domain flannel-056871 has defined MAC address 52:54:00:b2:bb:50 in network mk-flannel-056871
	I0407 13:50:24.907631 1229086 main.go:141] libmachine: (flannel-056871) found domain IP: 192.168.61.247
	I0407 13:50:24.907651 1229086 main.go:141] libmachine: (flannel-056871) reserving static IP address...
	I0407 13:50:24.907669 1229086 main.go:141] libmachine: (flannel-056871) DBG | domain flannel-056871 has current primary IP address 192.168.61.247 and MAC address 52:54:00:b2:bb:50 in network mk-flannel-056871
	I0407 13:50:24.908143 1229086 main.go:141] libmachine: (flannel-056871) DBG | unable to find host DHCP lease matching {name: "flannel-056871", mac: "52:54:00:b2:bb:50", ip: "192.168.61.247"} in network mk-flannel-056871
	I0407 13:50:25.024395 1229086 main.go:141] libmachine: (flannel-056871) DBG | Getting to WaitForSSH function...
	I0407 13:50:25.024431 1229086 main.go:141] libmachine: (flannel-056871) reserved static IP address 192.168.61.247 for domain flannel-056871
	I0407 13:50:25.024445 1229086 main.go:141] libmachine: (flannel-056871) waiting for SSH...
	I0407 13:50:25.028256 1229086 main.go:141] libmachine: (flannel-056871) DBG | domain flannel-056871 has defined MAC address 52:54:00:b2:bb:50 in network mk-flannel-056871
	I0407 13:50:25.029260 1229086 main.go:141] libmachine: (flannel-056871) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b2:bb:50", ip: ""} in network mk-flannel-056871: {Iface:virbr3 ExpiryTime:2025-04-07 14:50:16 +0000 UTC Type:0 Mac:52:54:00:b2:bb:50 Iaid: IPaddr:192.168.61.247 Prefix:24 Hostname:minikube Clientid:01:52:54:00:b2:bb:50}
	I0407 13:50:25.029293 1229086 main.go:141] libmachine: (flannel-056871) DBG | domain flannel-056871 has defined IP address 192.168.61.247 and MAC address 52:54:00:b2:bb:50 in network mk-flannel-056871
	I0407 13:50:25.029469 1229086 main.go:141] libmachine: (flannel-056871) DBG | Using SSH client type: external
	I0407 13:50:25.029496 1229086 main.go:141] libmachine: (flannel-056871) DBG | Using SSH private key: /home/jenkins/minikube-integration/20602-1162386/.minikube/machines/flannel-056871/id_rsa (-rw-------)
	I0407 13:50:25.029527 1229086 main.go:141] libmachine: (flannel-056871) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.61.247 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/20602-1162386/.minikube/machines/flannel-056871/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0407 13:50:25.029538 1229086 main.go:141] libmachine: (flannel-056871) DBG | About to run SSH command:
	I0407 13:50:25.029547 1229086 main.go:141] libmachine: (flannel-056871) DBG | exit 0
	I0407 13:50:25.158823 1229086 main.go:141] libmachine: (flannel-056871) DBG | SSH cmd err, output: <nil>: 
	I0407 13:50:25.159177 1229086 main.go:141] libmachine: (flannel-056871) KVM machine creation complete
	I0407 13:50:25.159481 1229086 main.go:141] libmachine: (flannel-056871) Calling .GetConfigRaw
	I0407 13:50:25.160052 1229086 main.go:141] libmachine: (flannel-056871) Calling .DriverName
	I0407 13:50:25.160271 1229086 main.go:141] libmachine: (flannel-056871) Calling .DriverName
	I0407 13:50:25.160437 1229086 main.go:141] libmachine: Waiting for machine to be running, this may take a few minutes...
	I0407 13:50:25.160453 1229086 main.go:141] libmachine: (flannel-056871) Calling .GetState
	I0407 13:50:25.161976 1229086 main.go:141] libmachine: Detecting operating system of created instance...
	I0407 13:50:25.162002 1229086 main.go:141] libmachine: Waiting for SSH to be available...
	I0407 13:50:25.162010 1229086 main.go:141] libmachine: Getting to WaitForSSH function...
	I0407 13:50:25.162019 1229086 main.go:141] libmachine: (flannel-056871) Calling .GetSSHHostname
	I0407 13:50:25.164297 1229086 main.go:141] libmachine: (flannel-056871) DBG | domain flannel-056871 has defined MAC address 52:54:00:b2:bb:50 in network mk-flannel-056871
	I0407 13:50:25.164661 1229086 main.go:141] libmachine: (flannel-056871) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b2:bb:50", ip: ""} in network mk-flannel-056871: {Iface:virbr3 ExpiryTime:2025-04-07 14:50:16 +0000 UTC Type:0 Mac:52:54:00:b2:bb:50 Iaid: IPaddr:192.168.61.247 Prefix:24 Hostname:flannel-056871 Clientid:01:52:54:00:b2:bb:50}
	I0407 13:50:25.164683 1229086 main.go:141] libmachine: (flannel-056871) DBG | domain flannel-056871 has defined IP address 192.168.61.247 and MAC address 52:54:00:b2:bb:50 in network mk-flannel-056871
	I0407 13:50:25.164814 1229086 main.go:141] libmachine: (flannel-056871) Calling .GetSSHPort
	I0407 13:50:25.165029 1229086 main.go:141] libmachine: (flannel-056871) Calling .GetSSHKeyPath
	I0407 13:50:25.165212 1229086 main.go:141] libmachine: (flannel-056871) Calling .GetSSHKeyPath
	I0407 13:50:25.165340 1229086 main.go:141] libmachine: (flannel-056871) Calling .GetSSHUsername
	I0407 13:50:25.165519 1229086 main.go:141] libmachine: Using SSH client type: native
	I0407 13:50:25.165759 1229086 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x836360] 0x839060 <nil>  [] 0s} 192.168.61.247 22 <nil> <nil>}
	I0407 13:50:25.165770 1229086 main.go:141] libmachine: About to run SSH command:
	exit 0
	I0407 13:50:25.273616 1229086 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0407 13:50:25.273644 1229086 main.go:141] libmachine: Detecting the provisioner...
	I0407 13:50:25.273653 1229086 main.go:141] libmachine: (flannel-056871) Calling .GetSSHHostname
	I0407 13:50:25.276517 1229086 main.go:141] libmachine: (flannel-056871) DBG | domain flannel-056871 has defined MAC address 52:54:00:b2:bb:50 in network mk-flannel-056871
	I0407 13:50:25.276907 1229086 main.go:141] libmachine: (flannel-056871) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b2:bb:50", ip: ""} in network mk-flannel-056871: {Iface:virbr3 ExpiryTime:2025-04-07 14:50:16 +0000 UTC Type:0 Mac:52:54:00:b2:bb:50 Iaid: IPaddr:192.168.61.247 Prefix:24 Hostname:flannel-056871 Clientid:01:52:54:00:b2:bb:50}
	I0407 13:50:25.276943 1229086 main.go:141] libmachine: (flannel-056871) DBG | domain flannel-056871 has defined IP address 192.168.61.247 and MAC address 52:54:00:b2:bb:50 in network mk-flannel-056871
	I0407 13:50:25.277203 1229086 main.go:141] libmachine: (flannel-056871) Calling .GetSSHPort
	I0407 13:50:25.277496 1229086 main.go:141] libmachine: (flannel-056871) Calling .GetSSHKeyPath
	I0407 13:50:25.277725 1229086 main.go:141] libmachine: (flannel-056871) Calling .GetSSHKeyPath
	I0407 13:50:25.277890 1229086 main.go:141] libmachine: (flannel-056871) Calling .GetSSHUsername
	I0407 13:50:25.278114 1229086 main.go:141] libmachine: Using SSH client type: native
	I0407 13:50:25.278425 1229086 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x836360] 0x839060 <nil>  [] 0s} 192.168.61.247 22 <nil> <nil>}
	I0407 13:50:25.278446 1229086 main.go:141] libmachine: About to run SSH command:
	cat /etc/os-release
	I0407 13:50:25.390765 1229086 main.go:141] libmachine: SSH cmd err, output: <nil>: NAME=Buildroot
	VERSION=2023.02.9-dirty
	ID=buildroot
	VERSION_ID=2023.02.9
	PRETTY_NAME="Buildroot 2023.02.9"
	
	I0407 13:50:25.390837 1229086 main.go:141] libmachine: found compatible host: buildroot
	I0407 13:50:25.390846 1229086 main.go:141] libmachine: Provisioning with buildroot...
	I0407 13:50:25.390854 1229086 main.go:141] libmachine: (flannel-056871) Calling .GetMachineName
	I0407 13:50:25.391167 1229086 buildroot.go:166] provisioning hostname "flannel-056871"
	I0407 13:50:25.391215 1229086 main.go:141] libmachine: (flannel-056871) Calling .GetMachineName
	I0407 13:50:25.391404 1229086 main.go:141] libmachine: (flannel-056871) Calling .GetSSHHostname
	I0407 13:50:25.394505 1229086 main.go:141] libmachine: (flannel-056871) DBG | domain flannel-056871 has defined MAC address 52:54:00:b2:bb:50 in network mk-flannel-056871
	I0407 13:50:25.394908 1229086 main.go:141] libmachine: (flannel-056871) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b2:bb:50", ip: ""} in network mk-flannel-056871: {Iface:virbr3 ExpiryTime:2025-04-07 14:50:16 +0000 UTC Type:0 Mac:52:54:00:b2:bb:50 Iaid: IPaddr:192.168.61.247 Prefix:24 Hostname:flannel-056871 Clientid:01:52:54:00:b2:bb:50}
	I0407 13:50:25.394932 1229086 main.go:141] libmachine: (flannel-056871) DBG | domain flannel-056871 has defined IP address 192.168.61.247 and MAC address 52:54:00:b2:bb:50 in network mk-flannel-056871
	I0407 13:50:25.395153 1229086 main.go:141] libmachine: (flannel-056871) Calling .GetSSHPort
	I0407 13:50:25.395368 1229086 main.go:141] libmachine: (flannel-056871) Calling .GetSSHKeyPath
	I0407 13:50:25.395527 1229086 main.go:141] libmachine: (flannel-056871) Calling .GetSSHKeyPath
	I0407 13:50:25.395695 1229086 main.go:141] libmachine: (flannel-056871) Calling .GetSSHUsername
	I0407 13:50:25.395886 1229086 main.go:141] libmachine: Using SSH client type: native
	I0407 13:50:25.396185 1229086 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x836360] 0x839060 <nil>  [] 0s} 192.168.61.247 22 <nil> <nil>}
	I0407 13:50:25.396205 1229086 main.go:141] libmachine: About to run SSH command:
	sudo hostname flannel-056871 && echo "flannel-056871" | sudo tee /etc/hostname
	I0407 13:50:25.523408 1229086 main.go:141] libmachine: SSH cmd err, output: <nil>: flannel-056871
	
	I0407 13:50:25.523441 1229086 main.go:141] libmachine: (flannel-056871) Calling .GetSSHHostname
	I0407 13:50:25.526729 1229086 main.go:141] libmachine: (flannel-056871) DBG | domain flannel-056871 has defined MAC address 52:54:00:b2:bb:50 in network mk-flannel-056871
	I0407 13:50:25.527159 1229086 main.go:141] libmachine: (flannel-056871) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b2:bb:50", ip: ""} in network mk-flannel-056871: {Iface:virbr3 ExpiryTime:2025-04-07 14:50:16 +0000 UTC Type:0 Mac:52:54:00:b2:bb:50 Iaid: IPaddr:192.168.61.247 Prefix:24 Hostname:flannel-056871 Clientid:01:52:54:00:b2:bb:50}
	I0407 13:50:25.527192 1229086 main.go:141] libmachine: (flannel-056871) DBG | domain flannel-056871 has defined IP address 192.168.61.247 and MAC address 52:54:00:b2:bb:50 in network mk-flannel-056871
	I0407 13:50:25.527433 1229086 main.go:141] libmachine: (flannel-056871) Calling .GetSSHPort
	I0407 13:50:25.527628 1229086 main.go:141] libmachine: (flannel-056871) Calling .GetSSHKeyPath
	I0407 13:50:25.527816 1229086 main.go:141] libmachine: (flannel-056871) Calling .GetSSHKeyPath
	I0407 13:50:25.527951 1229086 main.go:141] libmachine: (flannel-056871) Calling .GetSSHUsername
	I0407 13:50:25.528116 1229086 main.go:141] libmachine: Using SSH client type: native
	I0407 13:50:25.528345 1229086 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x836360] 0x839060 <nil>  [] 0s} 192.168.61.247 22 <nil> <nil>}
	I0407 13:50:25.528367 1229086 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sflannel-056871' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 flannel-056871/g' /etc/hosts;
				else 
					echo '127.0.1.1 flannel-056871' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0407 13:50:25.648639 1229086 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0407 13:50:25.648677 1229086 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/20602-1162386/.minikube CaCertPath:/home/jenkins/minikube-integration/20602-1162386/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/20602-1162386/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/20602-1162386/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/20602-1162386/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/20602-1162386/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/20602-1162386/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/20602-1162386/.minikube}
	I0407 13:50:25.648737 1229086 buildroot.go:174] setting up certificates
	I0407 13:50:25.648757 1229086 provision.go:84] configureAuth start
	I0407 13:50:25.648776 1229086 main.go:141] libmachine: (flannel-056871) Calling .GetMachineName
	I0407 13:50:25.649122 1229086 main.go:141] libmachine: (flannel-056871) Calling .GetIP
	I0407 13:50:25.652794 1229086 main.go:141] libmachine: (flannel-056871) DBG | domain flannel-056871 has defined MAC address 52:54:00:b2:bb:50 in network mk-flannel-056871
	I0407 13:50:25.653250 1229086 main.go:141] libmachine: (flannel-056871) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b2:bb:50", ip: ""} in network mk-flannel-056871: {Iface:virbr3 ExpiryTime:2025-04-07 14:50:16 +0000 UTC Type:0 Mac:52:54:00:b2:bb:50 Iaid: IPaddr:192.168.61.247 Prefix:24 Hostname:flannel-056871 Clientid:01:52:54:00:b2:bb:50}
	I0407 13:50:25.653279 1229086 main.go:141] libmachine: (flannel-056871) DBG | domain flannel-056871 has defined IP address 192.168.61.247 and MAC address 52:54:00:b2:bb:50 in network mk-flannel-056871
	I0407 13:50:25.653553 1229086 main.go:141] libmachine: (flannel-056871) Calling .GetSSHHostname
	I0407 13:50:25.658448 1229086 main.go:141] libmachine: (flannel-056871) DBG | domain flannel-056871 has defined MAC address 52:54:00:b2:bb:50 in network mk-flannel-056871
	I0407 13:50:25.659018 1229086 main.go:141] libmachine: (flannel-056871) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b2:bb:50", ip: ""} in network mk-flannel-056871: {Iface:virbr3 ExpiryTime:2025-04-07 14:50:16 +0000 UTC Type:0 Mac:52:54:00:b2:bb:50 Iaid: IPaddr:192.168.61.247 Prefix:24 Hostname:flannel-056871 Clientid:01:52:54:00:b2:bb:50}
	I0407 13:50:25.659051 1229086 main.go:141] libmachine: (flannel-056871) DBG | domain flannel-056871 has defined IP address 192.168.61.247 and MAC address 52:54:00:b2:bb:50 in network mk-flannel-056871
	I0407 13:50:25.659449 1229086 provision.go:143] copyHostCerts
	I0407 13:50:25.659521 1229086 exec_runner.go:144] found /home/jenkins/minikube-integration/20602-1162386/.minikube/cert.pem, removing ...
	I0407 13:50:25.659542 1229086 exec_runner.go:203] rm: /home/jenkins/minikube-integration/20602-1162386/.minikube/cert.pem
	I0407 13:50:25.659632 1229086 exec_runner.go:151] cp: /home/jenkins/minikube-integration/20602-1162386/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/20602-1162386/.minikube/cert.pem (1123 bytes)
	I0407 13:50:25.659734 1229086 exec_runner.go:144] found /home/jenkins/minikube-integration/20602-1162386/.minikube/key.pem, removing ...
	I0407 13:50:25.659744 1229086 exec_runner.go:203] rm: /home/jenkins/minikube-integration/20602-1162386/.minikube/key.pem
	I0407 13:50:25.659768 1229086 exec_runner.go:151] cp: /home/jenkins/minikube-integration/20602-1162386/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/20602-1162386/.minikube/key.pem (1675 bytes)
	I0407 13:50:25.659872 1229086 exec_runner.go:144] found /home/jenkins/minikube-integration/20602-1162386/.minikube/ca.pem, removing ...
	I0407 13:50:25.659884 1229086 exec_runner.go:203] rm: /home/jenkins/minikube-integration/20602-1162386/.minikube/ca.pem
	I0407 13:50:25.659922 1229086 exec_runner.go:151] cp: /home/jenkins/minikube-integration/20602-1162386/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/20602-1162386/.minikube/ca.pem (1078 bytes)
	I0407 13:50:25.659989 1229086 provision.go:117] generating server cert: /home/jenkins/minikube-integration/20602-1162386/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/20602-1162386/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/20602-1162386/.minikube/certs/ca-key.pem org=jenkins.flannel-056871 san=[127.0.0.1 192.168.61.247 flannel-056871 localhost minikube]
	I0407 13:50:25.947030 1229086 provision.go:177] copyRemoteCerts
	I0407 13:50:25.947100 1229086 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0407 13:50:25.947132 1229086 main.go:141] libmachine: (flannel-056871) Calling .GetSSHHostname
	I0407 13:50:25.950246 1229086 main.go:141] libmachine: (flannel-056871) DBG | domain flannel-056871 has defined MAC address 52:54:00:b2:bb:50 in network mk-flannel-056871
	I0407 13:50:25.950566 1229086 main.go:141] libmachine: (flannel-056871) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b2:bb:50", ip: ""} in network mk-flannel-056871: {Iface:virbr3 ExpiryTime:2025-04-07 14:50:16 +0000 UTC Type:0 Mac:52:54:00:b2:bb:50 Iaid: IPaddr:192.168.61.247 Prefix:24 Hostname:flannel-056871 Clientid:01:52:54:00:b2:bb:50}
	I0407 13:50:25.950592 1229086 main.go:141] libmachine: (flannel-056871) DBG | domain flannel-056871 has defined IP address 192.168.61.247 and MAC address 52:54:00:b2:bb:50 in network mk-flannel-056871
	I0407 13:50:25.950777 1229086 main.go:141] libmachine: (flannel-056871) Calling .GetSSHPort
	I0407 13:50:25.951048 1229086 main.go:141] libmachine: (flannel-056871) Calling .GetSSHKeyPath
	I0407 13:50:25.951245 1229086 main.go:141] libmachine: (flannel-056871) Calling .GetSSHUsername
	I0407 13:50:25.951386 1229086 sshutil.go:53] new ssh client: &{IP:192.168.61.247 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20602-1162386/.minikube/machines/flannel-056871/id_rsa Username:docker}
	I0407 13:50:26.036721 1229086 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20602-1162386/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0407 13:50:26.062992 1229086 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20602-1162386/.minikube/machines/server.pem --> /etc/docker/server.pem (1212 bytes)
	I0407 13:50:26.090226 1229086 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20602-1162386/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0407 13:50:26.118689 1229086 provision.go:87] duration metric: took 469.909903ms to configureAuth
	I0407 13:50:26.118728 1229086 buildroot.go:189] setting minikube options for container-runtime
	I0407 13:50:26.118898 1229086 config.go:182] Loaded profile config "flannel-056871": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.32.2
	I0407 13:50:26.118988 1229086 main.go:141] libmachine: (flannel-056871) Calling .GetSSHHostname
	I0407 13:50:24.360978 1230577 preload.go:131] Checking if preload exists for k8s version v1.32.2 and runtime crio
	I0407 13:50:24.361073 1230577 preload.go:146] Found local preload: /home/jenkins/minikube-integration/20602-1162386/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.32.2-cri-o-overlay-amd64.tar.lz4
	I0407 13:50:24.361091 1230577 cache.go:56] Caching tarball of preloaded images
	I0407 13:50:24.361317 1230577 preload.go:172] Found /home/jenkins/minikube-integration/20602-1162386/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.32.2-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I0407 13:50:24.361343 1230577 cache.go:59] Finished verifying existence of preloaded tar for v1.32.2 on crio
	I0407 13:50:24.361473 1230577 profile.go:143] Saving config to /home/jenkins/minikube-integration/20602-1162386/.minikube/profiles/bridge-056871/config.json ...
	I0407 13:50:24.361497 1230577 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20602-1162386/.minikube/profiles/bridge-056871/config.json: {Name:mk787730f7bdbd4b7af3de86222cd95141114af0 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0407 13:50:24.361669 1230577 start.go:360] acquireMachinesLock for bridge-056871: {Name:mk51d4c744fa92d56cf6ac11b1e792c85ef6709a Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0407 13:50:26.622792 1230577 start.go:364] duration metric: took 2.261094169s to acquireMachinesLock for "bridge-056871"
	I0407 13:50:26.622881 1230577 start.go:93] Provisioning new machine with config: &{Name:bridge-056871 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.35.0-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.46-1743675393-20591@sha256:8de8167e280414c9d16e4c3da59bc85bc7c9cc24228af995863fc7bcabfcf727 Memory:3072 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.32.2 ClusterName:bridge-056871 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:bridge} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.32.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.32.2 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0407 13:50:26.623069 1230577 start.go:125] createHost starting for "" (driver="kvm2")
	I0407 13:50:24.235049 1220973 pod_ready.go:103] pod "metrics-server-f79f97bbb-m78vh" in "kube-system" namespace has status "Ready":"False"
	I0407 13:50:26.732867 1220973 pod_ready.go:103] pod "metrics-server-f79f97bbb-m78vh" in "kube-system" namespace has status "Ready":"False"
	I0407 13:50:26.122223 1229086 main.go:141] libmachine: (flannel-056871) DBG | domain flannel-056871 has defined MAC address 52:54:00:b2:bb:50 in network mk-flannel-056871
	I0407 13:50:26.122584 1229086 main.go:141] libmachine: (flannel-056871) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b2:bb:50", ip: ""} in network mk-flannel-056871: {Iface:virbr3 ExpiryTime:2025-04-07 14:50:16 +0000 UTC Type:0 Mac:52:54:00:b2:bb:50 Iaid: IPaddr:192.168.61.247 Prefix:24 Hostname:flannel-056871 Clientid:01:52:54:00:b2:bb:50}
	I0407 13:50:26.122620 1229086 main.go:141] libmachine: (flannel-056871) DBG | domain flannel-056871 has defined IP address 192.168.61.247 and MAC address 52:54:00:b2:bb:50 in network mk-flannel-056871
	I0407 13:50:26.122805 1229086 main.go:141] libmachine: (flannel-056871) Calling .GetSSHPort
	I0407 13:50:26.123089 1229086 main.go:141] libmachine: (flannel-056871) Calling .GetSSHKeyPath
	I0407 13:50:26.123271 1229086 main.go:141] libmachine: (flannel-056871) Calling .GetSSHKeyPath
	I0407 13:50:26.123429 1229086 main.go:141] libmachine: (flannel-056871) Calling .GetSSHUsername
	I0407 13:50:26.123561 1229086 main.go:141] libmachine: Using SSH client type: native
	I0407 13:50:26.123760 1229086 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x836360] 0x839060 <nil>  [] 0s} 192.168.61.247 22 <nil> <nil>}
	I0407 13:50:26.123774 1229086 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0407 13:50:26.364755 1229086 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0407 13:50:26.364789 1229086 main.go:141] libmachine: Checking connection to Docker...
	I0407 13:50:26.364800 1229086 main.go:141] libmachine: (flannel-056871) Calling .GetURL
	I0407 13:50:26.366289 1229086 main.go:141] libmachine: (flannel-056871) DBG | using libvirt version 6000000
	I0407 13:50:26.369351 1229086 main.go:141] libmachine: (flannel-056871) DBG | domain flannel-056871 has defined MAC address 52:54:00:b2:bb:50 in network mk-flannel-056871
	I0407 13:50:26.369870 1229086 main.go:141] libmachine: (flannel-056871) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b2:bb:50", ip: ""} in network mk-flannel-056871: {Iface:virbr3 ExpiryTime:2025-04-07 14:50:16 +0000 UTC Type:0 Mac:52:54:00:b2:bb:50 Iaid: IPaddr:192.168.61.247 Prefix:24 Hostname:flannel-056871 Clientid:01:52:54:00:b2:bb:50}
	I0407 13:50:26.369906 1229086 main.go:141] libmachine: (flannel-056871) DBG | domain flannel-056871 has defined IP address 192.168.61.247 and MAC address 52:54:00:b2:bb:50 in network mk-flannel-056871
	I0407 13:50:26.370118 1229086 main.go:141] libmachine: Docker is up and running!
	I0407 13:50:26.370137 1229086 main.go:141] libmachine: Reticulating splines...
	I0407 13:50:26.370147 1229086 client.go:171] duration metric: took 24.871547676s to LocalClient.Create
	I0407 13:50:26.370181 1229086 start.go:167] duration metric: took 24.871627127s to libmachine.API.Create "flannel-056871"
	I0407 13:50:26.370196 1229086 start.go:293] postStartSetup for "flannel-056871" (driver="kvm2")
	I0407 13:50:26.370210 1229086 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0407 13:50:26.370241 1229086 main.go:141] libmachine: (flannel-056871) Calling .DriverName
	I0407 13:50:26.370524 1229086 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0407 13:50:26.370554 1229086 main.go:141] libmachine: (flannel-056871) Calling .GetSSHHostname
	I0407 13:50:26.373284 1229086 main.go:141] libmachine: (flannel-056871) DBG | domain flannel-056871 has defined MAC address 52:54:00:b2:bb:50 in network mk-flannel-056871
	I0407 13:50:26.373808 1229086 main.go:141] libmachine: (flannel-056871) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b2:bb:50", ip: ""} in network mk-flannel-056871: {Iface:virbr3 ExpiryTime:2025-04-07 14:50:16 +0000 UTC Type:0 Mac:52:54:00:b2:bb:50 Iaid: IPaddr:192.168.61.247 Prefix:24 Hostname:flannel-056871 Clientid:01:52:54:00:b2:bb:50}
	I0407 13:50:26.373840 1229086 main.go:141] libmachine: (flannel-056871) DBG | domain flannel-056871 has defined IP address 192.168.61.247 and MAC address 52:54:00:b2:bb:50 in network mk-flannel-056871
	I0407 13:50:26.374033 1229086 main.go:141] libmachine: (flannel-056871) Calling .GetSSHPort
	I0407 13:50:26.374678 1229086 main.go:141] libmachine: (flannel-056871) Calling .GetSSHKeyPath
	I0407 13:50:26.375055 1229086 main.go:141] libmachine: (flannel-056871) Calling .GetSSHUsername
	I0407 13:50:26.375384 1229086 sshutil.go:53] new ssh client: &{IP:192.168.61.247 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20602-1162386/.minikube/machines/flannel-056871/id_rsa Username:docker}
	I0407 13:50:26.460755 1229086 ssh_runner.go:195] Run: cat /etc/os-release
	I0407 13:50:26.465475 1229086 info.go:137] Remote host: Buildroot 2023.02.9
	I0407 13:50:26.465504 1229086 filesync.go:126] Scanning /home/jenkins/minikube-integration/20602-1162386/.minikube/addons for local assets ...
	I0407 13:50:26.465597 1229086 filesync.go:126] Scanning /home/jenkins/minikube-integration/20602-1162386/.minikube/files for local assets ...
	I0407 13:50:26.465696 1229086 filesync.go:149] local asset: /home/jenkins/minikube-integration/20602-1162386/.minikube/files/etc/ssl/certs/11697162.pem -> 11697162.pem in /etc/ssl/certs
	I0407 13:50:26.465843 1229086 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0407 13:50:26.476310 1229086 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20602-1162386/.minikube/files/etc/ssl/certs/11697162.pem --> /etc/ssl/certs/11697162.pem (1708 bytes)
	I0407 13:50:26.504930 1229086 start.go:296] duration metric: took 134.717884ms for postStartSetup
	I0407 13:50:26.505003 1229086 main.go:141] libmachine: (flannel-056871) Calling .GetConfigRaw
	I0407 13:50:26.505638 1229086 main.go:141] libmachine: (flannel-056871) Calling .GetIP
	I0407 13:50:26.508444 1229086 main.go:141] libmachine: (flannel-056871) DBG | domain flannel-056871 has defined MAC address 52:54:00:b2:bb:50 in network mk-flannel-056871
	I0407 13:50:26.508870 1229086 main.go:141] libmachine: (flannel-056871) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b2:bb:50", ip: ""} in network mk-flannel-056871: {Iface:virbr3 ExpiryTime:2025-04-07 14:50:16 +0000 UTC Type:0 Mac:52:54:00:b2:bb:50 Iaid: IPaddr:192.168.61.247 Prefix:24 Hostname:flannel-056871 Clientid:01:52:54:00:b2:bb:50}
	I0407 13:50:26.508897 1229086 main.go:141] libmachine: (flannel-056871) DBG | domain flannel-056871 has defined IP address 192.168.61.247 and MAC address 52:54:00:b2:bb:50 in network mk-flannel-056871
	I0407 13:50:26.509247 1229086 profile.go:143] Saving config to /home/jenkins/minikube-integration/20602-1162386/.minikube/profiles/flannel-056871/config.json ...
	I0407 13:50:26.509497 1229086 start.go:128] duration metric: took 25.03370963s to createHost
	I0407 13:50:26.509528 1229086 main.go:141] libmachine: (flannel-056871) Calling .GetSSHHostname
	I0407 13:50:26.512597 1229086 main.go:141] libmachine: (flannel-056871) DBG | domain flannel-056871 has defined MAC address 52:54:00:b2:bb:50 in network mk-flannel-056871
	I0407 13:50:26.513151 1229086 main.go:141] libmachine: (flannel-056871) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b2:bb:50", ip: ""} in network mk-flannel-056871: {Iface:virbr3 ExpiryTime:2025-04-07 14:50:16 +0000 UTC Type:0 Mac:52:54:00:b2:bb:50 Iaid: IPaddr:192.168.61.247 Prefix:24 Hostname:flannel-056871 Clientid:01:52:54:00:b2:bb:50}
	I0407 13:50:26.513192 1229086 main.go:141] libmachine: (flannel-056871) DBG | domain flannel-056871 has defined IP address 192.168.61.247 and MAC address 52:54:00:b2:bb:50 in network mk-flannel-056871
	I0407 13:50:26.513495 1229086 main.go:141] libmachine: (flannel-056871) Calling .GetSSHPort
	I0407 13:50:26.513772 1229086 main.go:141] libmachine: (flannel-056871) Calling .GetSSHKeyPath
	I0407 13:50:26.513968 1229086 main.go:141] libmachine: (flannel-056871) Calling .GetSSHKeyPath
	I0407 13:50:26.514138 1229086 main.go:141] libmachine: (flannel-056871) Calling .GetSSHUsername
	I0407 13:50:26.514375 1229086 main.go:141] libmachine: Using SSH client type: native
	I0407 13:50:26.514600 1229086 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x836360] 0x839060 <nil>  [] 0s} 192.168.61.247 22 <nil> <nil>}
	I0407 13:50:26.514610 1229086 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0407 13:50:26.622604 1229086 main.go:141] libmachine: SSH cmd err, output: <nil>: 1744033826.573479899
	
	I0407 13:50:26.622636 1229086 fix.go:216] guest clock: 1744033826.573479899
	I0407 13:50:26.622647 1229086 fix.go:229] Guest: 2025-04-07 13:50:26.573479899 +0000 UTC Remote: 2025-04-07 13:50:26.509514288 +0000 UTC m=+25.440251336 (delta=63.965611ms)
	I0407 13:50:26.622676 1229086 fix.go:200] guest clock delta is within tolerance: 63.965611ms
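The two "guest clock" lines above come from comparing the result of the "date +%s.%N" SSH command with the host's wall clock at the end of provisioning; a skew beyond the tolerance would trigger a clock fix, while the ~64ms delta seen here is accepted. A rough manual re-run of the same probe (user and key path taken from the ssh client lines above; this is a sketch, not how minikube invokes it):
	guest=$(ssh -i /home/jenkins/minikube-integration/20602-1162386/.minikube/machines/flannel-056871/id_rsa docker@192.168.61.247 'date +%s.%N')   # guest wall clock
	host=$(date +%s.%N)                                                                                                                              # host wall clock
	awk -v g="$guest" -v h="$host" 'BEGIN { printf "delta: %+.3fs\n", h - g }'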
	I0407 13:50:26.622684 1229086 start.go:83] releasing machines lock for "flannel-056871", held for 25.147016716s
	I0407 13:50:26.622719 1229086 main.go:141] libmachine: (flannel-056871) Calling .DriverName
	I0407 13:50:26.623017 1229086 main.go:141] libmachine: (flannel-056871) Calling .GetIP
	I0407 13:50:26.626237 1229086 main.go:141] libmachine: (flannel-056871) DBG | domain flannel-056871 has defined MAC address 52:54:00:b2:bb:50 in network mk-flannel-056871
	I0407 13:50:26.626627 1229086 main.go:141] libmachine: (flannel-056871) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b2:bb:50", ip: ""} in network mk-flannel-056871: {Iface:virbr3 ExpiryTime:2025-04-07 14:50:16 +0000 UTC Type:0 Mac:52:54:00:b2:bb:50 Iaid: IPaddr:192.168.61.247 Prefix:24 Hostname:flannel-056871 Clientid:01:52:54:00:b2:bb:50}
	I0407 13:50:26.626661 1229086 main.go:141] libmachine: (flannel-056871) DBG | domain flannel-056871 has defined IP address 192.168.61.247 and MAC address 52:54:00:b2:bb:50 in network mk-flannel-056871
	I0407 13:50:26.626774 1229086 main.go:141] libmachine: (flannel-056871) Calling .DriverName
	I0407 13:50:26.627398 1229086 main.go:141] libmachine: (flannel-056871) Calling .DriverName
	I0407 13:50:26.627579 1229086 main.go:141] libmachine: (flannel-056871) Calling .DriverName
	I0407 13:50:26.627661 1229086 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0407 13:50:26.627701 1229086 main.go:141] libmachine: (flannel-056871) Calling .GetSSHHostname
	I0407 13:50:26.627833 1229086 ssh_runner.go:195] Run: cat /version.json
	I0407 13:50:26.627863 1229086 main.go:141] libmachine: (flannel-056871) Calling .GetSSHHostname
	I0407 13:50:26.631003 1229086 main.go:141] libmachine: (flannel-056871) DBG | domain flannel-056871 has defined MAC address 52:54:00:b2:bb:50 in network mk-flannel-056871
	I0407 13:50:26.631283 1229086 main.go:141] libmachine: (flannel-056871) DBG | domain flannel-056871 has defined MAC address 52:54:00:b2:bb:50 in network mk-flannel-056871
	I0407 13:50:26.631492 1229086 main.go:141] libmachine: (flannel-056871) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b2:bb:50", ip: ""} in network mk-flannel-056871: {Iface:virbr3 ExpiryTime:2025-04-07 14:50:16 +0000 UTC Type:0 Mac:52:54:00:b2:bb:50 Iaid: IPaddr:192.168.61.247 Prefix:24 Hostname:flannel-056871 Clientid:01:52:54:00:b2:bb:50}
	I0407 13:50:26.631532 1229086 main.go:141] libmachine: (flannel-056871) DBG | domain flannel-056871 has defined IP address 192.168.61.247 and MAC address 52:54:00:b2:bb:50 in network mk-flannel-056871
	I0407 13:50:26.631652 1229086 main.go:141] libmachine: (flannel-056871) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b2:bb:50", ip: ""} in network mk-flannel-056871: {Iface:virbr3 ExpiryTime:2025-04-07 14:50:16 +0000 UTC Type:0 Mac:52:54:00:b2:bb:50 Iaid: IPaddr:192.168.61.247 Prefix:24 Hostname:flannel-056871 Clientid:01:52:54:00:b2:bb:50}
	I0407 13:50:26.631676 1229086 main.go:141] libmachine: (flannel-056871) DBG | domain flannel-056871 has defined IP address 192.168.61.247 and MAC address 52:54:00:b2:bb:50 in network mk-flannel-056871
	I0407 13:50:26.631741 1229086 main.go:141] libmachine: (flannel-056871) Calling .GetSSHPort
	I0407 13:50:26.631895 1229086 main.go:141] libmachine: (flannel-056871) Calling .GetSSHPort
	I0407 13:50:26.631976 1229086 main.go:141] libmachine: (flannel-056871) Calling .GetSSHKeyPath
	I0407 13:50:26.632068 1229086 main.go:141] libmachine: (flannel-056871) Calling .GetSSHKeyPath
	I0407 13:50:26.632135 1229086 main.go:141] libmachine: (flannel-056871) Calling .GetSSHUsername
	I0407 13:50:26.632154 1229086 main.go:141] libmachine: (flannel-056871) Calling .GetSSHUsername
	I0407 13:50:26.632262 1229086 sshutil.go:53] new ssh client: &{IP:192.168.61.247 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20602-1162386/.minikube/machines/flannel-056871/id_rsa Username:docker}
	I0407 13:50:26.632312 1229086 sshutil.go:53] new ssh client: &{IP:192.168.61.247 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20602-1162386/.minikube/machines/flannel-056871/id_rsa Username:docker}
	I0407 13:50:26.738942 1229086 ssh_runner.go:195] Run: systemctl --version
	I0407 13:50:26.745563 1229086 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0407 13:50:26.918903 1229086 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0407 13:50:26.924740 1229086 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0407 13:50:26.924829 1229086 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0407 13:50:26.943036 1229086 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0407 13:50:26.943074 1229086 start.go:495] detecting cgroup driver to use...
	I0407 13:50:26.943159 1229086 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0407 13:50:26.960762 1229086 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0407 13:50:26.977092 1229086 docker.go:217] disabling cri-docker service (if available) ...
	I0407 13:50:26.977178 1229086 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0407 13:50:26.992721 1229086 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0407 13:50:27.009013 1229086 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0407 13:50:27.136407 1229086 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0407 13:50:27.281179 1229086 docker.go:233] disabling docker service ...
	I0407 13:50:27.281260 1229086 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0407 13:50:27.295527 1229086 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0407 13:50:27.309965 1229086 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0407 13:50:27.462386 1229086 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0407 13:50:27.615247 1229086 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0407 13:50:27.631262 1229086 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0407 13:50:27.651829 1229086 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10" pause image...
	I0407 13:50:27.651886 1229086 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0407 13:50:27.664328 1229086 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0407 13:50:27.664406 1229086 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0407 13:50:27.677109 1229086 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0407 13:50:27.688301 1229086 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0407 13:50:27.700257 1229086 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0407 13:50:27.712546 1229086 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0407 13:50:27.724951 1229086 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I0407 13:50:27.747202 1229086 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
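Taken together, the crictl.yaml write and the sed edits above point crictl at the CRI-O socket and adjust /etc/crio/crio.conf.d/02-crio.conf to minikube's defaults: the pause image, the cgroupfs cgroup manager, a pod-scoped conmon cgroup, and the unprivileged-port sysctl. A quick way to confirm the result on the guest (illustrative; the expected values, as set by the commands above, are shown as comments):
	grep -E 'pause_image|cgroup_manager|conmon_cgroup|ip_unprivileged_port_start' /etc/crio/crio.conf.d/02-crio.conf
	# pause_image = "registry.k8s.io/pause:3.10"
	# cgroup_manager = "cgroupfs"
	# conmon_cgroup = "pod"
	#   "net.ipv4.ip_unprivileged_port_start=0",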
	I0407 13:50:27.761073 1229086 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0407 13:50:27.773916 1229086 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0407 13:50:27.774002 1229086 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0407 13:50:27.791701 1229086 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
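The sysctl failure above is expected on a guest where the br_netfilter module has not been loaded yet: /proc/sys/net/bridge/ only exists once the module is in, which is why the next two commands load it and enable IPv4 forwarding. In shorthand:
	sudo modprobe br_netfilter                          # creates /proc/sys/net/bridge/bridge-nf-call-iptables
	sudo sysctl net.bridge.bridge-nf-call-iptables      # would now succeed
	sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"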
	I0407 13:50:27.802543 1229086 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0407 13:50:27.941266 1229086 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0407 13:50:28.051193 1229086 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I0407 13:50:28.051286 1229086 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0407 13:50:28.056697 1229086 start.go:563] Will wait 60s for crictl version
	I0407 13:50:28.056862 1229086 ssh_runner.go:195] Run: which crictl
	I0407 13:50:28.061529 1229086 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0407 13:50:28.103833 1229086 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0407 13:50:28.103922 1229086 ssh_runner.go:195] Run: crio --version
	I0407 13:50:28.134968 1229086 ssh_runner.go:195] Run: crio --version
	I0407 13:50:28.175545 1229086 out.go:177] * Preparing Kubernetes v1.32.2 on CRI-O 1.29.1 ...
	I0407 13:50:26.625302 1230577 out.go:235] * Creating kvm2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	I0407 13:50:26.625526 1230577 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0407 13:50:26.625577 1230577 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0407 13:50:26.646372 1230577 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41575
	I0407 13:50:26.646944 1230577 main.go:141] libmachine: () Calling .GetVersion
	I0407 13:50:26.647595 1230577 main.go:141] libmachine: Using API Version  1
	I0407 13:50:26.647621 1230577 main.go:141] libmachine: () Calling .SetConfigRaw
	I0407 13:50:26.648019 1230577 main.go:141] libmachine: () Calling .GetMachineName
	I0407 13:50:26.648208 1230577 main.go:141] libmachine: (bridge-056871) Calling .GetMachineName
	I0407 13:50:26.648372 1230577 main.go:141] libmachine: (bridge-056871) Calling .DriverName
	I0407 13:50:26.648516 1230577 start.go:159] libmachine.API.Create for "bridge-056871" (driver="kvm2")
	I0407 13:50:26.648543 1230577 client.go:168] LocalClient.Create starting
	I0407 13:50:26.648578 1230577 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/20602-1162386/.minikube/certs/ca.pem
	I0407 13:50:26.648614 1230577 main.go:141] libmachine: Decoding PEM data...
	I0407 13:50:26.648627 1230577 main.go:141] libmachine: Parsing certificate...
	I0407 13:50:26.648686 1230577 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/20602-1162386/.minikube/certs/cert.pem
	I0407 13:50:26.648708 1230577 main.go:141] libmachine: Decoding PEM data...
	I0407 13:50:26.648719 1230577 main.go:141] libmachine: Parsing certificate...
	I0407 13:50:26.648734 1230577 main.go:141] libmachine: Running pre-create checks...
	I0407 13:50:26.648744 1230577 main.go:141] libmachine: (bridge-056871) Calling .PreCreateCheck
	I0407 13:50:26.649147 1230577 main.go:141] libmachine: (bridge-056871) Calling .GetConfigRaw
	I0407 13:50:26.649658 1230577 main.go:141] libmachine: Creating machine...
	I0407 13:50:26.649677 1230577 main.go:141] libmachine: (bridge-056871) Calling .Create
	I0407 13:50:26.649891 1230577 main.go:141] libmachine: (bridge-056871) creating KVM machine...
	I0407 13:50:26.649910 1230577 main.go:141] libmachine: (bridge-056871) creating network...
	I0407 13:50:26.651193 1230577 main.go:141] libmachine: (bridge-056871) DBG | found existing default KVM network
	I0407 13:50:26.652004 1230577 main.go:141] libmachine: (bridge-056871) DBG | I0407 13:50:26.651784 1230645 network.go:211] skipping subnet 192.168.39.0/24 that is taken: &{IP:192.168.39.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.39.0/24 Gateway:192.168.39.1 ClientMin:192.168.39.2 ClientMax:192.168.39.254 Broadcast:192.168.39.255 IsPrivate:true Interface:{IfaceName:virbr1 IfaceIPv4:192.168.39.1 IfaceMTU:1500 IfaceMAC:52:54:00:87:f6:77} reservation:<nil>}
	I0407 13:50:26.653222 1230577 main.go:141] libmachine: (bridge-056871) DBG | I0407 13:50:26.653125 1230645 network.go:206] using free private subnet 192.168.50.0/24: &{IP:192.168.50.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.50.0/24 Gateway:192.168.50.1 ClientMin:192.168.50.2 ClientMax:192.168.50.254 Broadcast:192.168.50.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc0002098c0}
	I0407 13:50:26.653249 1230577 main.go:141] libmachine: (bridge-056871) DBG | created network xml: 
	I0407 13:50:26.653261 1230577 main.go:141] libmachine: (bridge-056871) DBG | <network>
	I0407 13:50:26.653269 1230577 main.go:141] libmachine: (bridge-056871) DBG |   <name>mk-bridge-056871</name>
	I0407 13:50:26.653277 1230577 main.go:141] libmachine: (bridge-056871) DBG |   <dns enable='no'/>
	I0407 13:50:26.653283 1230577 main.go:141] libmachine: (bridge-056871) DBG |   
	I0407 13:50:26.653293 1230577 main.go:141] libmachine: (bridge-056871) DBG |   <ip address='192.168.50.1' netmask='255.255.255.0'>
	I0407 13:50:26.653304 1230577 main.go:141] libmachine: (bridge-056871) DBG |     <dhcp>
	I0407 13:50:26.653315 1230577 main.go:141] libmachine: (bridge-056871) DBG |       <range start='192.168.50.2' end='192.168.50.253'/>
	I0407 13:50:26.653328 1230577 main.go:141] libmachine: (bridge-056871) DBG |     </dhcp>
	I0407 13:50:26.653337 1230577 main.go:141] libmachine: (bridge-056871) DBG |   </ip>
	I0407 13:50:26.653347 1230577 main.go:141] libmachine: (bridge-056871) DBG |   
	I0407 13:50:26.653355 1230577 main.go:141] libmachine: (bridge-056871) DBG | </network>
	I0407 13:50:26.653363 1230577 main.go:141] libmachine: (bridge-056871) DBG | 
	I0407 13:50:26.659288 1230577 main.go:141] libmachine: (bridge-056871) DBG | trying to create private KVM network mk-bridge-056871 192.168.50.0/24...
	I0407 13:50:26.754740 1230577 main.go:141] libmachine: (bridge-056871) DBG | private KVM network mk-bridge-056871 192.168.50.0/24 created
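The driver creates the dedicated NATed guest network from the XML printed above through the libvirt API; a hand-rolled equivalent with virsh (a sketch, assuming the XML were saved to mk-bridge-056871.xml) would be:
	virsh net-define mk-bridge-056871.xml
	virsh net-start mk-bridge-056871
	virsh net-list --all    # should now show mk-bridge-056871 alongside default and the other mk-* networks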
	I0407 13:50:26.754786 1230577 main.go:141] libmachine: (bridge-056871) setting up store path in /home/jenkins/minikube-integration/20602-1162386/.minikube/machines/bridge-056871 ...
	I0407 13:50:26.754812 1230577 main.go:141] libmachine: (bridge-056871) building disk image from file:///home/jenkins/minikube-integration/20602-1162386/.minikube/cache/iso/amd64/minikube-v1.35.0-amd64.iso
	I0407 13:50:26.754885 1230577 main.go:141] libmachine: (bridge-056871) DBG | I0407 13:50:26.754809 1230645 common.go:144] Making disk image using store path: /home/jenkins/minikube-integration/20602-1162386/.minikube
	I0407 13:50:26.755137 1230577 main.go:141] libmachine: (bridge-056871) Downloading /home/jenkins/minikube-integration/20602-1162386/.minikube/cache/boot2docker.iso from file:///home/jenkins/minikube-integration/20602-1162386/.minikube/cache/iso/amd64/minikube-v1.35.0-amd64.iso...
	I0407 13:50:27.080306 1230577 main.go:141] libmachine: (bridge-056871) DBG | I0407 13:50:27.080159 1230645 common.go:151] Creating ssh key: /home/jenkins/minikube-integration/20602-1162386/.minikube/machines/bridge-056871/id_rsa...
	I0407 13:50:27.470492 1230577 main.go:141] libmachine: (bridge-056871) DBG | I0407 13:50:27.470358 1230645 common.go:157] Creating raw disk image: /home/jenkins/minikube-integration/20602-1162386/.minikube/machines/bridge-056871/bridge-056871.rawdisk...
	I0407 13:50:27.470527 1230577 main.go:141] libmachine: (bridge-056871) DBG | Writing magic tar header
	I0407 13:50:27.470543 1230577 main.go:141] libmachine: (bridge-056871) DBG | Writing SSH key tar header
	I0407 13:50:27.470614 1230577 main.go:141] libmachine: (bridge-056871) DBG | I0407 13:50:27.470566 1230645 common.go:171] Fixing permissions on /home/jenkins/minikube-integration/20602-1162386/.minikube/machines/bridge-056871 ...
	I0407 13:50:27.470742 1230577 main.go:141] libmachine: (bridge-056871) DBG | checking permissions on dir: /home/jenkins/minikube-integration/20602-1162386/.minikube/machines/bridge-056871
	I0407 13:50:27.470774 1230577 main.go:141] libmachine: (bridge-056871) setting executable bit set on /home/jenkins/minikube-integration/20602-1162386/.minikube/machines/bridge-056871 (perms=drwx------)
	I0407 13:50:27.470787 1230577 main.go:141] libmachine: (bridge-056871) DBG | checking permissions on dir: /home/jenkins/minikube-integration/20602-1162386/.minikube/machines
	I0407 13:50:27.470818 1230577 main.go:141] libmachine: (bridge-056871) DBG | checking permissions on dir: /home/jenkins/minikube-integration/20602-1162386/.minikube
	I0407 13:50:27.470830 1230577 main.go:141] libmachine: (bridge-056871) DBG | checking permissions on dir: /home/jenkins/minikube-integration/20602-1162386
	I0407 13:50:27.470842 1230577 main.go:141] libmachine: (bridge-056871) DBG | checking permissions on dir: /home/jenkins/minikube-integration
	I0407 13:50:27.470856 1230577 main.go:141] libmachine: (bridge-056871) setting executable bit set on /home/jenkins/minikube-integration/20602-1162386/.minikube/machines (perms=drwxr-xr-x)
	I0407 13:50:27.470863 1230577 main.go:141] libmachine: (bridge-056871) DBG | checking permissions on dir: /home/jenkins
	I0407 13:50:27.470876 1230577 main.go:141] libmachine: (bridge-056871) DBG | checking permissions on dir: /home
	I0407 13:50:27.470888 1230577 main.go:141] libmachine: (bridge-056871) DBG | skipping /home - not owner
	I0407 13:50:27.470897 1230577 main.go:141] libmachine: (bridge-056871) setting executable bit set on /home/jenkins/minikube-integration/20602-1162386/.minikube (perms=drwxr-xr-x)
	I0407 13:50:27.470910 1230577 main.go:141] libmachine: (bridge-056871) setting executable bit set on /home/jenkins/minikube-integration/20602-1162386 (perms=drwxrwxr-x)
	I0407 13:50:27.470922 1230577 main.go:141] libmachine: (bridge-056871) setting executable bit set on /home/jenkins/minikube-integration (perms=drwxrwxr-x)
	I0407 13:50:27.470934 1230577 main.go:141] libmachine: (bridge-056871) setting executable bit set on /home/jenkins (perms=drwxr-xr-x)
	I0407 13:50:27.470943 1230577 main.go:141] libmachine: (bridge-056871) creating domain...
	I0407 13:50:27.472089 1230577 main.go:141] libmachine: (bridge-056871) define libvirt domain using xml: 
	I0407 13:50:27.472123 1230577 main.go:141] libmachine: (bridge-056871) <domain type='kvm'>
	I0407 13:50:27.472134 1230577 main.go:141] libmachine: (bridge-056871)   <name>bridge-056871</name>
	I0407 13:50:27.472141 1230577 main.go:141] libmachine: (bridge-056871)   <memory unit='MiB'>3072</memory>
	I0407 13:50:27.472151 1230577 main.go:141] libmachine: (bridge-056871)   <vcpu>2</vcpu>
	I0407 13:50:27.472158 1230577 main.go:141] libmachine: (bridge-056871)   <features>
	I0407 13:50:27.472165 1230577 main.go:141] libmachine: (bridge-056871)     <acpi/>
	I0407 13:50:27.472178 1230577 main.go:141] libmachine: (bridge-056871)     <apic/>
	I0407 13:50:27.472186 1230577 main.go:141] libmachine: (bridge-056871)     <pae/>
	I0407 13:50:27.472193 1230577 main.go:141] libmachine: (bridge-056871)     
	I0407 13:50:27.472204 1230577 main.go:141] libmachine: (bridge-056871)   </features>
	I0407 13:50:27.472215 1230577 main.go:141] libmachine: (bridge-056871)   <cpu mode='host-passthrough'>
	I0407 13:50:27.472251 1230577 main.go:141] libmachine: (bridge-056871)   
	I0407 13:50:27.472280 1230577 main.go:141] libmachine: (bridge-056871)   </cpu>
	I0407 13:50:27.472291 1230577 main.go:141] libmachine: (bridge-056871)   <os>
	I0407 13:50:27.472305 1230577 main.go:141] libmachine: (bridge-056871)     <type>hvm</type>
	I0407 13:50:27.472318 1230577 main.go:141] libmachine: (bridge-056871)     <boot dev='cdrom'/>
	I0407 13:50:27.472325 1230577 main.go:141] libmachine: (bridge-056871)     <boot dev='hd'/>
	I0407 13:50:27.472337 1230577 main.go:141] libmachine: (bridge-056871)     <bootmenu enable='no'/>
	I0407 13:50:27.472342 1230577 main.go:141] libmachine: (bridge-056871)   </os>
	I0407 13:50:27.472347 1230577 main.go:141] libmachine: (bridge-056871)   <devices>
	I0407 13:50:27.472355 1230577 main.go:141] libmachine: (bridge-056871)     <disk type='file' device='cdrom'>
	I0407 13:50:27.472379 1230577 main.go:141] libmachine: (bridge-056871)       <source file='/home/jenkins/minikube-integration/20602-1162386/.minikube/machines/bridge-056871/boot2docker.iso'/>
	I0407 13:50:27.472398 1230577 main.go:141] libmachine: (bridge-056871)       <target dev='hdc' bus='scsi'/>
	I0407 13:50:27.472407 1230577 main.go:141] libmachine: (bridge-056871)       <readonly/>
	I0407 13:50:27.472417 1230577 main.go:141] libmachine: (bridge-056871)     </disk>
	I0407 13:50:27.472428 1230577 main.go:141] libmachine: (bridge-056871)     <disk type='file' device='disk'>
	I0407 13:50:27.472440 1230577 main.go:141] libmachine: (bridge-056871)       <driver name='qemu' type='raw' cache='default' io='threads' />
	I0407 13:50:27.472456 1230577 main.go:141] libmachine: (bridge-056871)       <source file='/home/jenkins/minikube-integration/20602-1162386/.minikube/machines/bridge-056871/bridge-056871.rawdisk'/>
	I0407 13:50:27.472466 1230577 main.go:141] libmachine: (bridge-056871)       <target dev='hda' bus='virtio'/>
	I0407 13:50:27.472473 1230577 main.go:141] libmachine: (bridge-056871)     </disk>
	I0407 13:50:27.472481 1230577 main.go:141] libmachine: (bridge-056871)     <interface type='network'>
	I0407 13:50:27.472494 1230577 main.go:141] libmachine: (bridge-056871)       <source network='mk-bridge-056871'/>
	I0407 13:50:27.472513 1230577 main.go:141] libmachine: (bridge-056871)       <model type='virtio'/>
	I0407 13:50:27.472525 1230577 main.go:141] libmachine: (bridge-056871)     </interface>
	I0407 13:50:27.472535 1230577 main.go:141] libmachine: (bridge-056871)     <interface type='network'>
	I0407 13:50:27.472545 1230577 main.go:141] libmachine: (bridge-056871)       <source network='default'/>
	I0407 13:50:27.472554 1230577 main.go:141] libmachine: (bridge-056871)       <model type='virtio'/>
	I0407 13:50:27.472564 1230577 main.go:141] libmachine: (bridge-056871)     </interface>
	I0407 13:50:27.472569 1230577 main.go:141] libmachine: (bridge-056871)     <serial type='pty'>
	I0407 13:50:27.472580 1230577 main.go:141] libmachine: (bridge-056871)       <target port='0'/>
	I0407 13:50:27.472593 1230577 main.go:141] libmachine: (bridge-056871)     </serial>
	I0407 13:50:27.472621 1230577 main.go:141] libmachine: (bridge-056871)     <console type='pty'>
	I0407 13:50:27.472644 1230577 main.go:141] libmachine: (bridge-056871)       <target type='serial' port='0'/>
	I0407 13:50:27.472655 1230577 main.go:141] libmachine: (bridge-056871)     </console>
	I0407 13:50:27.472669 1230577 main.go:141] libmachine: (bridge-056871)     <rng model='virtio'>
	I0407 13:50:27.472683 1230577 main.go:141] libmachine: (bridge-056871)       <backend model='random'>/dev/random</backend>
	I0407 13:50:27.472689 1230577 main.go:141] libmachine: (bridge-056871)     </rng>
	I0407 13:50:27.472699 1230577 main.go:141] libmachine: (bridge-056871)     
	I0407 13:50:27.472705 1230577 main.go:141] libmachine: (bridge-056871)     
	I0407 13:50:27.472716 1230577 main.go:141] libmachine: (bridge-056871)   </devices>
	I0407 13:50:27.472724 1230577 main.go:141] libmachine: (bridge-056871) </domain>
	I0407 13:50:27.472738 1230577 main.go:141] libmachine: (bridge-056871) 
	I0407 13:50:27.477583 1230577 main.go:141] libmachine: (bridge-056871) DBG | domain bridge-056871 has defined MAC address 52:54:00:53:dd:9f in network default
	I0407 13:50:27.478323 1230577 main.go:141] libmachine: (bridge-056871) starting domain...
	I0407 13:50:27.478342 1230577 main.go:141] libmachine: (bridge-056871) ensuring networks are active...
	I0407 13:50:27.478352 1230577 main.go:141] libmachine: (bridge-056871) DBG | domain bridge-056871 has defined MAC address 52:54:00:d9:a5:38 in network mk-bridge-056871
	I0407 13:50:27.479117 1230577 main.go:141] libmachine: (bridge-056871) Ensuring network default is active
	I0407 13:50:27.479451 1230577 main.go:141] libmachine: (bridge-056871) Ensuring network mk-bridge-056871 is active
	I0407 13:50:27.479970 1230577 main.go:141] libmachine: (bridge-056871) getting domain XML...
	I0407 13:50:27.480738 1230577 main.go:141] libmachine: (bridge-056871) creating domain...
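The domain XML above is likewise handed to libvirt directly; the virsh equivalent of this define/start sequence (a sketch, assuming the XML were saved to bridge-056871.xml) is:
	virsh define bridge-056871.xml                  # registers the domain printed above
	virsh start bridge-056871                       # boots the VM
	virsh domifaddr bridge-056871 --source lease    # the DHCP lease the retry loop further below is waiting for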
	I0407 13:50:28.949656 1230577 main.go:141] libmachine: (bridge-056871) waiting for IP...
	I0407 13:50:28.950754 1230577 main.go:141] libmachine: (bridge-056871) DBG | domain bridge-056871 has defined MAC address 52:54:00:d9:a5:38 in network mk-bridge-056871
	I0407 13:50:28.951466 1230577 main.go:141] libmachine: (bridge-056871) DBG | unable to find current IP address of domain bridge-056871 in network mk-bridge-056871
	I0407 13:50:28.951521 1230577 main.go:141] libmachine: (bridge-056871) DBG | I0407 13:50:28.951469 1230645 retry.go:31] will retry after 215.247092ms: waiting for domain to come up
	I0407 13:50:29.168411 1230577 main.go:141] libmachine: (bridge-056871) DBG | domain bridge-056871 has defined MAC address 52:54:00:d9:a5:38 in network mk-bridge-056871
	I0407 13:50:29.169205 1230577 main.go:141] libmachine: (bridge-056871) DBG | unable to find current IP address of domain bridge-056871 in network mk-bridge-056871
	I0407 13:50:29.169291 1230577 main.go:141] libmachine: (bridge-056871) DBG | I0407 13:50:29.169186 1230645 retry.go:31] will retry after 290.693734ms: waiting for domain to come up
	I0407 13:50:28.176892 1229086 main.go:141] libmachine: (flannel-056871) Calling .GetIP
	I0407 13:50:28.180543 1229086 main.go:141] libmachine: (flannel-056871) DBG | domain flannel-056871 has defined MAC address 52:54:00:b2:bb:50 in network mk-flannel-056871
	I0407 13:50:28.181100 1229086 main.go:141] libmachine: (flannel-056871) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b2:bb:50", ip: ""} in network mk-flannel-056871: {Iface:virbr3 ExpiryTime:2025-04-07 14:50:16 +0000 UTC Type:0 Mac:52:54:00:b2:bb:50 Iaid: IPaddr:192.168.61.247 Prefix:24 Hostname:flannel-056871 Clientid:01:52:54:00:b2:bb:50}
	I0407 13:50:28.181127 1229086 main.go:141] libmachine: (flannel-056871) DBG | domain flannel-056871 has defined IP address 192.168.61.247 and MAC address 52:54:00:b2:bb:50 in network mk-flannel-056871
	I0407 13:50:28.181509 1229086 ssh_runner.go:195] Run: grep 192.168.61.1	host.minikube.internal$ /etc/hosts
	I0407 13:50:28.185853 1229086 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.61.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0407 13:50:28.200991 1229086 kubeadm.go:883] updating cluster {Name:flannel-056871 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.35.0-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.46-1743675393-20591@sha256:8de8167e280414c9d16e4c3da59bc85bc7c9cc24228af995863fc7bcabfcf727 Memory:3072 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.32.2 ClusterName:flannel-056871 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:flannel} Nodes:[{Name: IP:192.168.61.247 Port:8443 KubernetesVersion:v1.32.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0407 13:50:28.201123 1229086 preload.go:131] Checking if preload exists for k8s version v1.32.2 and runtime crio
	I0407 13:50:28.201182 1229086 ssh_runner.go:195] Run: sudo crictl images --output json
	I0407 13:50:28.238677 1229086 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.32.2". assuming images are not preloaded.
	I0407 13:50:28.238758 1229086 ssh_runner.go:195] Run: which lz4
	I0407 13:50:28.243012 1229086 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I0407 13:50:28.247954 1229086 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0407 13:50:28.248000 1229086 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20602-1162386/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.32.2-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (399124012 bytes)
	I0407 13:50:29.817006 1229086 crio.go:462] duration metric: took 1.57406554s to copy over tarball
	I0407 13:50:29.817108 1229086 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
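The sequence above is minikube's image-preload path: the freshly created guest has no /preloaded.tar.lz4 and crictl reports no kube images, so the host-cached tarball is copied over and unpacked straight into /var, letting CRI-O start with all control-plane images already present. Roughly the underlying operations (paths as in the log; minikube performs them through its own SSH runner rather than plain scp/ssh):
	scp /home/jenkins/minikube-integration/20602-1162386/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.32.2-cri-o-overlay-amd64.tar.lz4 docker@192.168.61.247:/preloaded.tar.lz4
	ssh docker@192.168.61.247 'sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4'
	ssh docker@192.168.61.247 'sudo crictl images --output json'   # should now list the registry.k8s.io control-plane images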
	I0407 13:50:28.733101 1220973 pod_ready.go:103] pod "metrics-server-f79f97bbb-m78vh" in "kube-system" namespace has status "Ready":"False"
	I0407 13:50:30.734125 1220973 pod_ready.go:103] pod "metrics-server-f79f97bbb-m78vh" in "kube-system" namespace has status "Ready":"False"
	I0407 13:50:29.462243 1230577 main.go:141] libmachine: (bridge-056871) DBG | domain bridge-056871 has defined MAC address 52:54:00:d9:a5:38 in network mk-bridge-056871
	I0407 13:50:29.463004 1230577 main.go:141] libmachine: (bridge-056871) DBG | unable to find current IP address of domain bridge-056871 in network mk-bridge-056871
	I0407 13:50:29.463042 1230577 main.go:141] libmachine: (bridge-056871) DBG | I0407 13:50:29.462964 1230645 retry.go:31] will retry after 467.697129ms: waiting for domain to come up
	I0407 13:50:29.932873 1230577 main.go:141] libmachine: (bridge-056871) DBG | domain bridge-056871 has defined MAC address 52:54:00:d9:a5:38 in network mk-bridge-056871
	I0407 13:50:29.933567 1230577 main.go:141] libmachine: (bridge-056871) DBG | unable to find current IP address of domain bridge-056871 in network mk-bridge-056871
	I0407 13:50:29.933596 1230577 main.go:141] libmachine: (bridge-056871) DBG | I0407 13:50:29.933533 1230645 retry.go:31] will retry after 535.905567ms: waiting for domain to come up
	I0407 13:50:30.471706 1230577 main.go:141] libmachine: (bridge-056871) DBG | domain bridge-056871 has defined MAC address 52:54:00:d9:a5:38 in network mk-bridge-056871
	I0407 13:50:30.472379 1230577 main.go:141] libmachine: (bridge-056871) DBG | unable to find current IP address of domain bridge-056871 in network mk-bridge-056871
	I0407 13:50:30.472407 1230577 main.go:141] libmachine: (bridge-056871) DBG | I0407 13:50:30.472330 1230645 retry.go:31] will retry after 618.480423ms: waiting for domain to come up
	I0407 13:50:31.092807 1230577 main.go:141] libmachine: (bridge-056871) DBG | domain bridge-056871 has defined MAC address 52:54:00:d9:a5:38 in network mk-bridge-056871
	I0407 13:50:31.093788 1230577 main.go:141] libmachine: (bridge-056871) DBG | unable to find current IP address of domain bridge-056871 in network mk-bridge-056871
	I0407 13:50:31.093829 1230577 main.go:141] libmachine: (bridge-056871) DBG | I0407 13:50:31.093727 1230645 retry.go:31] will retry after 725.388807ms: waiting for domain to come up
	I0407 13:50:31.821291 1230577 main.go:141] libmachine: (bridge-056871) DBG | domain bridge-056871 has defined MAC address 52:54:00:d9:a5:38 in network mk-bridge-056871
	I0407 13:50:31.821911 1230577 main.go:141] libmachine: (bridge-056871) DBG | unable to find current IP address of domain bridge-056871 in network mk-bridge-056871
	I0407 13:50:31.821942 1230577 main.go:141] libmachine: (bridge-056871) DBG | I0407 13:50:31.821892 1230645 retry.go:31] will retry after 775.984409ms: waiting for domain to come up
	I0407 13:50:32.600220 1230577 main.go:141] libmachine: (bridge-056871) DBG | domain bridge-056871 has defined MAC address 52:54:00:d9:a5:38 in network mk-bridge-056871
	I0407 13:50:32.600842 1230577 main.go:141] libmachine: (bridge-056871) DBG | unable to find current IP address of domain bridge-056871 in network mk-bridge-056871
	I0407 13:50:32.600873 1230577 main.go:141] libmachine: (bridge-056871) DBG | I0407 13:50:32.600802 1230645 retry.go:31] will retry after 962.969903ms: waiting for domain to come up
	I0407 13:50:33.565304 1230577 main.go:141] libmachine: (bridge-056871) DBG | domain bridge-056871 has defined MAC address 52:54:00:d9:a5:38 in network mk-bridge-056871
	I0407 13:50:33.565921 1230577 main.go:141] libmachine: (bridge-056871) DBG | unable to find current IP address of domain bridge-056871 in network mk-bridge-056871
	I0407 13:50:33.565973 1230577 main.go:141] libmachine: (bridge-056871) DBG | I0407 13:50:33.565873 1230645 retry.go:31] will retry after 1.612514856s: waiting for domain to come up
	I0407 13:50:32.494439 1229086 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (2.677295945s)
	I0407 13:50:32.494478 1229086 crio.go:469] duration metric: took 2.677420142s to extract the tarball
	I0407 13:50:32.494489 1229086 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I0407 13:50:32.535794 1229086 ssh_runner.go:195] Run: sudo crictl images --output json
	I0407 13:50:32.583475 1229086 crio.go:514] all images are preloaded for cri-o runtime.
	I0407 13:50:32.583510 1229086 cache_images.go:84] Images are preloaded, skipping loading
	I0407 13:50:32.583522 1229086 kubeadm.go:934] updating node { 192.168.61.247 8443 v1.32.2 crio true true} ...
	I0407 13:50:32.583650 1229086 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.32.2/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=flannel-056871 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.61.247
	
	[Install]
	 config:
	{KubernetesVersion:v1.32.2 ClusterName:flannel-056871 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:flannel}
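A note on the kubelet unit fragment above: the empty "ExecStart=" line is the standard systemd drop-in idiom for clearing the ExecStart inherited from the stock kubelet.service before substituting minikube's own command line. The fragment ends up on the guest as a drop-in (the 314-byte scp to /etc/systemd/system/kubelet.service.d/10-kubeadm.conf a few lines below), roughly:
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.32.2/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=flannel-056871 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.61.247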
	I0407 13:50:32.583742 1229086 ssh_runner.go:195] Run: crio config
	I0407 13:50:32.640040 1229086 cni.go:84] Creating CNI manager for "flannel"
	I0407 13:50:32.640079 1229086 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0407 13:50:32.640116 1229086 kubeadm.go:189] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.61.247 APIServerPort:8443 KubernetesVersion:v1.32.2 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:flannel-056871 NodeName:flannel-056871 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.61.247"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.61.247 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0407 13:50:32.640292 1229086 kubeadm.go:195] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.61.247
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "flannel-056871"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.61.247"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.61.247"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      - name: "proxy-refresh-interval"
	        value: "70000"
	kubernetesVersion: v1.32.2
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0407 13:50:32.640374 1229086 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.32.2
	I0407 13:50:32.651316 1229086 binaries.go:44] Found k8s binaries, skipping transfer
	I0407 13:50:32.651398 1229086 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0407 13:50:32.662004 1229086 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (314 bytes)
	I0407 13:50:32.681383 1229086 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0407 13:50:32.699464 1229086 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2294 bytes)
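The 2294-byte kubeadm.yaml.new just copied over is the YAML printed above; it later becomes the --config input when the cluster is bootstrapped. A simplified stand-alone invocation on the guest would look like the sketch below (minikube itself adds further flags, for example an --ignore-preflight-errors list):
	sudo /var/lib/minikube/binaries/v1.32.2/kubeadm init --config /var/tmp/minikube/kubeadm.yaml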
	I0407 13:50:32.718073 1229086 ssh_runner.go:195] Run: grep 192.168.61.247	control-plane.minikube.internal$ /etc/hosts
	I0407 13:50:32.723813 1229086 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.61.247	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0407 13:50:32.742471 1229086 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0407 13:50:32.882766 1229086 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0407 13:50:32.902076 1229086 certs.go:68] Setting up /home/jenkins/minikube-integration/20602-1162386/.minikube/profiles/flannel-056871 for IP: 192.168.61.247
	I0407 13:50:32.902105 1229086 certs.go:194] generating shared ca certs ...
	I0407 13:50:32.902124 1229086 certs.go:226] acquiring lock for ca certs: {Name:mk8e89191fca7f2111bdd08c345368f593b0d5d0 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0407 13:50:32.902321 1229086 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/20602-1162386/.minikube/ca.key
	I0407 13:50:32.902375 1229086 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/20602-1162386/.minikube/proxy-client-ca.key
	I0407 13:50:32.902390 1229086 certs.go:256] generating profile certs ...
	I0407 13:50:32.902467 1229086 certs.go:363] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/20602-1162386/.minikube/profiles/flannel-056871/client.key
	I0407 13:50:32.902487 1229086 crypto.go:68] Generating cert /home/jenkins/minikube-integration/20602-1162386/.minikube/profiles/flannel-056871/client.crt with IP's: []
	I0407 13:50:33.569949 1229086 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/20602-1162386/.minikube/profiles/flannel-056871/client.crt ...
	I0407 13:50:33.569987 1229086 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20602-1162386/.minikube/profiles/flannel-056871/client.crt: {Name:mk4805221c36a2cca723bfa233dd774354e307a5 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0407 13:50:33.570209 1229086 crypto.go:164] Writing key to /home/jenkins/minikube-integration/20602-1162386/.minikube/profiles/flannel-056871/client.key ...
	I0407 13:50:33.570231 1229086 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20602-1162386/.minikube/profiles/flannel-056871/client.key: {Name:mk818a09a0e5f7f407383d67ffb02583991c1838 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0407 13:50:33.570358 1229086 certs.go:363] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/20602-1162386/.minikube/profiles/flannel-056871/apiserver.key.a08cb125
	I0407 13:50:33.570378 1229086 crypto.go:68] Generating cert /home/jenkins/minikube-integration/20602-1162386/.minikube/profiles/flannel-056871/apiserver.crt.a08cb125 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.61.247]
	I0407 13:50:33.921092 1229086 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/20602-1162386/.minikube/profiles/flannel-056871/apiserver.crt.a08cb125 ...
	I0407 13:50:33.921130 1229086 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20602-1162386/.minikube/profiles/flannel-056871/apiserver.crt.a08cb125: {Name:mk31775d71650842d3bfbf6897e603ba9bba8d7d Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0407 13:50:33.921346 1229086 crypto.go:164] Writing key to /home/jenkins/minikube-integration/20602-1162386/.minikube/profiles/flannel-056871/apiserver.key.a08cb125 ...
	I0407 13:50:33.921366 1229086 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20602-1162386/.minikube/profiles/flannel-056871/apiserver.key.a08cb125: {Name:mk0ccec011f5454d91ba41eaba4bfc3e7912f0e8 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0407 13:50:33.921468 1229086 certs.go:381] copying /home/jenkins/minikube-integration/20602-1162386/.minikube/profiles/flannel-056871/apiserver.crt.a08cb125 -> /home/jenkins/minikube-integration/20602-1162386/.minikube/profiles/flannel-056871/apiserver.crt
	I0407 13:50:33.921568 1229086 certs.go:385] copying /home/jenkins/minikube-integration/20602-1162386/.minikube/profiles/flannel-056871/apiserver.key.a08cb125 -> /home/jenkins/minikube-integration/20602-1162386/.minikube/profiles/flannel-056871/apiserver.key
	I0407 13:50:33.921658 1229086 certs.go:363] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/20602-1162386/.minikube/profiles/flannel-056871/proxy-client.key
	I0407 13:50:33.921682 1229086 crypto.go:68] Generating cert /home/jenkins/minikube-integration/20602-1162386/.minikube/profiles/flannel-056871/proxy-client.crt with IP's: []
	I0407 13:50:34.039930 1229086 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/20602-1162386/.minikube/profiles/flannel-056871/proxy-client.crt ...
	I0407 13:50:34.039969 1229086 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20602-1162386/.minikube/profiles/flannel-056871/proxy-client.crt: {Name:mk7847287bc2ac1a4785e1fb0e3cdcf907896c79 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0407 13:50:34.040196 1229086 crypto.go:164] Writing key to /home/jenkins/minikube-integration/20602-1162386/.minikube/profiles/flannel-056871/proxy-client.key ...
	I0407 13:50:34.040231 1229086 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20602-1162386/.minikube/profiles/flannel-056871/proxy-client.key: {Name:mk31e01ebd2604ee7baf20e81e06b392f9ab1ffa Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
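At this point the profile has its three signed pairs: a client cert for the minikube user, the API-server serving cert (note the SAN list above: 10.96.0.1, 127.0.0.1, 10.0.0.1 and the node IP 192.168.61.247), and the front-proxy client cert for the aggregator. The SANs can be double-checked on the host with a one-liner such as:
	openssl x509 -noout -text -in /home/jenkins/minikube-integration/20602-1162386/.minikube/profiles/flannel-056871/apiserver.crt | grep -A1 'Subject Alternative Name'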
	I0407 13:50:34.040480 1229086 certs.go:484] found cert: /home/jenkins/minikube-integration/20602-1162386/.minikube/certs/1169716.pem (1338 bytes)
	W0407 13:50:34.040523 1229086 certs.go:480] ignoring /home/jenkins/minikube-integration/20602-1162386/.minikube/certs/1169716_empty.pem, impossibly tiny 0 bytes
	I0407 13:50:34.040530 1229086 certs.go:484] found cert: /home/jenkins/minikube-integration/20602-1162386/.minikube/certs/ca-key.pem (1675 bytes)
	I0407 13:50:34.040553 1229086 certs.go:484] found cert: /home/jenkins/minikube-integration/20602-1162386/.minikube/certs/ca.pem (1078 bytes)
	I0407 13:50:34.040576 1229086 certs.go:484] found cert: /home/jenkins/minikube-integration/20602-1162386/.minikube/certs/cert.pem (1123 bytes)
	I0407 13:50:34.040600 1229086 certs.go:484] found cert: /home/jenkins/minikube-integration/20602-1162386/.minikube/certs/key.pem (1675 bytes)
	I0407 13:50:34.040641 1229086 certs.go:484] found cert: /home/jenkins/minikube-integration/20602-1162386/.minikube/files/etc/ssl/certs/11697162.pem (1708 bytes)
	I0407 13:50:34.041300 1229086 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20602-1162386/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0407 13:50:34.086199 1229086 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20602-1162386/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0407 13:50:34.117611 1229086 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20602-1162386/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0407 13:50:34.144602 1229086 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20602-1162386/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0407 13:50:34.172664 1229086 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20602-1162386/.minikube/profiles/flannel-056871/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1419 bytes)
	I0407 13:50:34.201384 1229086 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20602-1162386/.minikube/profiles/flannel-056871/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0407 13:50:34.230768 1229086 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20602-1162386/.minikube/profiles/flannel-056871/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0407 13:50:34.257946 1229086 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20602-1162386/.minikube/profiles/flannel-056871/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0407 13:50:34.284148 1229086 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20602-1162386/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0407 13:50:34.309784 1229086 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20602-1162386/.minikube/certs/1169716.pem --> /usr/share/ca-certificates/1169716.pem (1338 bytes)
	I0407 13:50:34.337736 1229086 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20602-1162386/.minikube/files/etc/ssl/certs/11697162.pem --> /usr/share/ca-certificates/11697162.pem (1708 bytes)
	I0407 13:50:34.368444 1229086 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0407 13:50:34.386775 1229086 ssh_runner.go:195] Run: openssl version
	I0407 13:50:34.392660 1229086 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/11697162.pem && ln -fs /usr/share/ca-certificates/11697162.pem /etc/ssl/certs/11697162.pem"
	I0407 13:50:34.404605 1229086 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/11697162.pem
	I0407 13:50:34.409966 1229086 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Apr  7 12:22 /usr/share/ca-certificates/11697162.pem
	I0407 13:50:34.410059 1229086 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/11697162.pem
	I0407 13:50:34.417192 1229086 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/11697162.pem /etc/ssl/certs/3ec20f2e.0"
	I0407 13:50:34.434247 1229086 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0407 13:50:34.452498 1229086 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0407 13:50:34.461061 1229086 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Apr  7 12:14 /usr/share/ca-certificates/minikubeCA.pem
	I0407 13:50:34.461145 1229086 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0407 13:50:34.469236 1229086 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0407 13:50:34.486757 1229086 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/1169716.pem && ln -fs /usr/share/ca-certificates/1169716.pem /etc/ssl/certs/1169716.pem"
	I0407 13:50:34.505841 1229086 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/1169716.pem
	I0407 13:50:34.512500 1229086 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Apr  7 12:22 /usr/share/ca-certificates/1169716.pem
	I0407 13:50:34.512575 1229086 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/1169716.pem
	I0407 13:50:34.519355 1229086 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/1169716.pem /etc/ssl/certs/51391683.0"
	I0407 13:50:34.531934 1229086 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0407 13:50:34.537134 1229086 certs.go:399] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I0407 13:50:34.537220 1229086 kubeadm.go:392] StartCluster: {Name:flannel-056871 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.35.0-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.46-1743675393-20591@sha256:8de8167e280414c9d16e4c3da59bc85bc7c9cc24228af995863fc7bcabfcf727 Memory:3072 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.32.2 ClusterName:flannel-056871 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:flannel} Nodes:[{Name: IP:192.168.61.247 Port:8443 KubernetesVersion:v1.32.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0407 13:50:34.537316 1229086 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0407 13:50:34.537378 1229086 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0407 13:50:34.577926 1229086 cri.go:89] found id: ""
	I0407 13:50:34.578004 1229086 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0407 13:50:34.588664 1229086 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0407 13:50:34.601841 1229086 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0407 13:50:34.614425 1229086 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0407 13:50:34.614447 1229086 kubeadm.go:157] found existing configuration files:
	
	I0407 13:50:34.614509 1229086 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0407 13:50:34.626757 1229086 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0407 13:50:34.626835 1229086 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0407 13:50:34.636946 1229086 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0407 13:50:34.646731 1229086 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0407 13:50:34.646810 1229086 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0407 13:50:34.656940 1229086 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0407 13:50:34.666574 1229086 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0407 13:50:34.666657 1229086 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0407 13:50:34.676694 1229086 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0407 13:50:34.687739 1229086 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0407 13:50:34.687802 1229086 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0407 13:50:34.698275 1229086 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.32.2:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0407 13:50:34.871264 1229086 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0407 13:50:33.233049 1220973 pod_ready.go:103] pod "metrics-server-f79f97bbb-m78vh" in "kube-system" namespace has status "Ready":"False"
	I0407 13:50:35.233735 1220973 pod_ready.go:103] pod "metrics-server-f79f97bbb-m78vh" in "kube-system" namespace has status "Ready":"False"
	I0407 13:50:37.233866 1220973 pod_ready.go:103] pod "metrics-server-f79f97bbb-m78vh" in "kube-system" namespace has status "Ready":"False"
	I0407 13:50:35.180805 1230577 main.go:141] libmachine: (bridge-056871) DBG | domain bridge-056871 has defined MAC address 52:54:00:d9:a5:38 in network mk-bridge-056871
	I0407 13:50:35.181459 1230577 main.go:141] libmachine: (bridge-056871) DBG | unable to find current IP address of domain bridge-056871 in network mk-bridge-056871
	I0407 13:50:35.181495 1230577 main.go:141] libmachine: (bridge-056871) DBG | I0407 13:50:35.181447 1230645 retry.go:31] will retry after 1.757890507s: waiting for domain to come up
	I0407 13:50:36.941039 1230577 main.go:141] libmachine: (bridge-056871) DBG | domain bridge-056871 has defined MAC address 52:54:00:d9:a5:38 in network mk-bridge-056871
	I0407 13:50:36.942199 1230577 main.go:141] libmachine: (bridge-056871) DBG | unable to find current IP address of domain bridge-056871 in network mk-bridge-056871
	I0407 13:50:36.942252 1230577 main.go:141] libmachine: (bridge-056871) DBG | I0407 13:50:36.942113 1230645 retry.go:31] will retry after 2.027504729s: waiting for domain to come up
	I0407 13:50:38.970898 1230577 main.go:141] libmachine: (bridge-056871) DBG | domain bridge-056871 has defined MAC address 52:54:00:d9:a5:38 in network mk-bridge-056871
	I0407 13:50:38.971484 1230577 main.go:141] libmachine: (bridge-056871) DBG | unable to find current IP address of domain bridge-056871 in network mk-bridge-056871
	I0407 13:50:38.971524 1230577 main.go:141] libmachine: (bridge-056871) DBG | I0407 13:50:38.971472 1230645 retry.go:31] will retry after 2.641457601s: waiting for domain to come up
	I0407 13:50:39.734200 1220973 pod_ready.go:103] pod "metrics-server-f79f97bbb-m78vh" in "kube-system" namespace has status "Ready":"False"
	I0407 13:50:42.232999 1220973 pod_ready.go:103] pod "metrics-server-f79f97bbb-m78vh" in "kube-system" namespace has status "Ready":"False"
	I0407 13:50:41.614467 1230577 main.go:141] libmachine: (bridge-056871) DBG | domain bridge-056871 has defined MAC address 52:54:00:d9:a5:38 in network mk-bridge-056871
	I0407 13:50:41.615056 1230577 main.go:141] libmachine: (bridge-056871) DBG | unable to find current IP address of domain bridge-056871 in network mk-bridge-056871
	I0407 13:50:41.615081 1230577 main.go:141] libmachine: (bridge-056871) DBG | I0407 13:50:41.615017 1230645 retry.go:31] will retry after 2.736353363s: waiting for domain to come up
	I0407 13:50:45.532979 1229086 kubeadm.go:310] [init] Using Kubernetes version: v1.32.2
	I0407 13:50:45.533086 1229086 kubeadm.go:310] [preflight] Running pre-flight checks
	I0407 13:50:45.533212 1229086 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0407 13:50:45.533368 1229086 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0407 13:50:45.533481 1229086 kubeadm.go:310] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I0407 13:50:45.533565 1229086 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0407 13:50:45.535389 1229086 out.go:235]   - Generating certificates and keys ...
	I0407 13:50:45.535475 1229086 kubeadm.go:310] [certs] Using existing ca certificate authority
	I0407 13:50:45.535564 1229086 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I0407 13:50:45.535678 1229086 kubeadm.go:310] [certs] Generating "apiserver-kubelet-client" certificate and key
	I0407 13:50:45.535769 1229086 kubeadm.go:310] [certs] Generating "front-proxy-ca" certificate and key
	I0407 13:50:45.535860 1229086 kubeadm.go:310] [certs] Generating "front-proxy-client" certificate and key
	I0407 13:50:45.535934 1229086 kubeadm.go:310] [certs] Generating "etcd/ca" certificate and key
	I0407 13:50:45.536016 1229086 kubeadm.go:310] [certs] Generating "etcd/server" certificate and key
	I0407 13:50:45.536156 1229086 kubeadm.go:310] [certs] etcd/server serving cert is signed for DNS names [flannel-056871 localhost] and IPs [192.168.61.247 127.0.0.1 ::1]
	I0407 13:50:45.536233 1229086 kubeadm.go:310] [certs] Generating "etcd/peer" certificate and key
	I0407 13:50:45.536389 1229086 kubeadm.go:310] [certs] etcd/peer serving cert is signed for DNS names [flannel-056871 localhost] and IPs [192.168.61.247 127.0.0.1 ::1]
	I0407 13:50:45.536478 1229086 kubeadm.go:310] [certs] Generating "etcd/healthcheck-client" certificate and key
	I0407 13:50:45.536574 1229086 kubeadm.go:310] [certs] Generating "apiserver-etcd-client" certificate and key
	I0407 13:50:45.536646 1229086 kubeadm.go:310] [certs] Generating "sa" key and public key
	I0407 13:50:45.536731 1229086 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0407 13:50:45.536804 1229086 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0407 13:50:45.536887 1229086 kubeadm.go:310] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I0407 13:50:45.536948 1229086 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0407 13:50:45.537028 1229086 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0407 13:50:45.537076 1229086 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0407 13:50:45.537178 1229086 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0407 13:50:45.537281 1229086 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0407 13:50:45.539118 1229086 out.go:235]   - Booting up control plane ...
	I0407 13:50:45.539267 1229086 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0407 13:50:45.539349 1229086 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0407 13:50:45.539411 1229086 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0407 13:50:45.539507 1229086 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0407 13:50:45.539580 1229086 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0407 13:50:45.539635 1229086 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I0407 13:50:45.539818 1229086 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I0407 13:50:45.539949 1229086 kubeadm.go:310] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I0407 13:50:45.540025 1229086 kubeadm.go:310] [kubelet-check] The kubelet is healthy after 501.784059ms
	I0407 13:50:45.540087 1229086 kubeadm.go:310] [api-check] Waiting for a healthy API server. This can take up to 4m0s
	I0407 13:50:45.540140 1229086 kubeadm.go:310] [api-check] The API server is healthy after 5.502559723s
	I0407 13:50:45.540233 1229086 kubeadm.go:310] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0407 13:50:45.540348 1229086 kubeadm.go:310] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I0407 13:50:45.540399 1229086 kubeadm.go:310] [upload-certs] Skipping phase. Please see --upload-certs
	I0407 13:50:45.540552 1229086 kubeadm.go:310] [mark-control-plane] Marking the node flannel-056871 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I0407 13:50:45.540608 1229086 kubeadm.go:310] [bootstrap-token] Using token: t9k2ad.s7t8ejujlbhlgahm
	I0407 13:50:45.542178 1229086 out.go:235]   - Configuring RBAC rules ...
	I0407 13:50:45.542292 1229086 kubeadm.go:310] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0407 13:50:45.542396 1229086 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I0407 13:50:45.542597 1229086 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0407 13:50:45.542789 1229086 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I0407 13:50:45.542938 1229086 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0407 13:50:45.543010 1229086 kubeadm.go:310] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0407 13:50:45.543104 1229086 kubeadm.go:310] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I0407 13:50:45.543142 1229086 kubeadm.go:310] [addons] Applied essential addon: CoreDNS
	I0407 13:50:45.543177 1229086 kubeadm.go:310] [addons] Applied essential addon: kube-proxy
	I0407 13:50:45.543183 1229086 kubeadm.go:310] 
	I0407 13:50:45.543232 1229086 kubeadm.go:310] Your Kubernetes control-plane has initialized successfully!
	I0407 13:50:45.543238 1229086 kubeadm.go:310] 
	I0407 13:50:45.543318 1229086 kubeadm.go:310] To start using your cluster, you need to run the following as a regular user:
	I0407 13:50:45.543324 1229086 kubeadm.go:310] 
	I0407 13:50:45.543362 1229086 kubeadm.go:310]   mkdir -p $HOME/.kube
	I0407 13:50:45.543421 1229086 kubeadm.go:310]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0407 13:50:45.543465 1229086 kubeadm.go:310]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0407 13:50:45.543475 1229086 kubeadm.go:310] 
	I0407 13:50:45.543523 1229086 kubeadm.go:310] Alternatively, if you are the root user, you can run:
	I0407 13:50:45.543535 1229086 kubeadm.go:310] 
	I0407 13:50:45.543581 1229086 kubeadm.go:310]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I0407 13:50:45.543585 1229086 kubeadm.go:310] 
	I0407 13:50:45.543627 1229086 kubeadm.go:310] You should now deploy a pod network to the cluster.
	I0407 13:50:45.543696 1229086 kubeadm.go:310] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0407 13:50:45.543764 1229086 kubeadm.go:310]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0407 13:50:45.543770 1229086 kubeadm.go:310] 
	I0407 13:50:45.543839 1229086 kubeadm.go:310] You can now join any number of control-plane nodes by copying certificate authorities
	I0407 13:50:45.543900 1229086 kubeadm.go:310] and service account keys on each node and then running the following as root:
	I0407 13:50:45.543906 1229086 kubeadm.go:310] 
	I0407 13:50:45.543972 1229086 kubeadm.go:310]   kubeadm join control-plane.minikube.internal:8443 --token t9k2ad.s7t8ejujlbhlgahm \
	I0407 13:50:45.544074 1229086 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:001387253bb6e222db2af12e9fcbe5a1c3ee2a6f53970e58b5a7d017a3fc6618 \
	I0407 13:50:45.544103 1229086 kubeadm.go:310] 	--control-plane 
	I0407 13:50:45.544108 1229086 kubeadm.go:310] 
	I0407 13:50:45.544185 1229086 kubeadm.go:310] Then you can join any number of worker nodes by running the following on each as root:
	I0407 13:50:45.544193 1229086 kubeadm.go:310] 
	I0407 13:50:45.544266 1229086 kubeadm.go:310] kubeadm join control-plane.minikube.internal:8443 --token t9k2ad.s7t8ejujlbhlgahm \
	I0407 13:50:45.544372 1229086 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:001387253bb6e222db2af12e9fcbe5a1c3ee2a6f53970e58b5a7d017a3fc6618 
	I0407 13:50:45.544403 1229086 cni.go:84] Creating CNI manager for "flannel"
	I0407 13:50:45.546812 1229086 out.go:177] * Configuring Flannel (Container Networking Interface) ...
	I0407 13:50:45.548070 1229086 ssh_runner.go:195] Run: stat /opt/cni/bin/portmap
	I0407 13:50:45.554148 1229086 cni.go:182] applying CNI manifest using /var/lib/minikube/binaries/v1.32.2/kubectl ...
	I0407 13:50:45.554177 1229086 ssh_runner.go:362] scp memory --> /var/tmp/minikube/cni.yaml (4348 bytes)
	I0407 13:50:45.573799 1229086 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.32.2/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
	I0407 13:50:46.037107 1229086 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0407 13:50:46.037331 1229086 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.32.2/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0407 13:50:46.037333 1229086 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.32.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes flannel-056871 minikube.k8s.io/updated_at=2025_04_07T13_50_46_0700 minikube.k8s.io/version=v1.35.0 minikube.k8s.io/commit=33e6edc58d2014d70e908473920ef4ac8eae1e43 minikube.k8s.io/name=flannel-056871 minikube.k8s.io/primary=true
	I0407 13:50:46.064293 1229086 ops.go:34] apiserver oom_adj: -16
	I0407 13:50:44.733521 1220973 pod_ready.go:103] pod "metrics-server-f79f97bbb-m78vh" in "kube-system" namespace has status "Ready":"False"
	I0407 13:50:47.234709 1220973 pod_ready.go:103] pod "metrics-server-f79f97bbb-m78vh" in "kube-system" namespace has status "Ready":"False"
	I0407 13:50:44.353055 1230577 main.go:141] libmachine: (bridge-056871) DBG | domain bridge-056871 has defined MAC address 52:54:00:d9:a5:38 in network mk-bridge-056871
	I0407 13:50:44.353642 1230577 main.go:141] libmachine: (bridge-056871) DBG | unable to find current IP address of domain bridge-056871 in network mk-bridge-056871
	I0407 13:50:44.353675 1230577 main.go:141] libmachine: (bridge-056871) DBG | I0407 13:50:44.353588 1230645 retry.go:31] will retry after 5.250336716s: waiting for domain to come up
	I0407 13:50:46.201152 1229086 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.32.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0407 13:50:46.701353 1229086 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.32.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0407 13:50:47.201423 1229086 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.32.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0407 13:50:47.701873 1229086 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.32.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0407 13:50:48.201616 1229086 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.32.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0407 13:50:48.701363 1229086 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.32.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0407 13:50:49.202051 1229086 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.32.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0407 13:50:49.702005 1229086 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.32.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0407 13:50:49.851095 1229086 kubeadm.go:1113] duration metric: took 3.81385601s to wait for elevateKubeSystemPrivileges
	I0407 13:50:49.851152 1229086 kubeadm.go:394] duration metric: took 15.313943562s to StartCluster
	I0407 13:50:49.851181 1229086 settings.go:142] acquiring lock: {Name:mk19c4dc5d7992642f3fe5ca0bdb3ea65af01b65 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0407 13:50:49.851301 1229086 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/20602-1162386/kubeconfig
	I0407 13:50:49.852818 1229086 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20602-1162386/kubeconfig: {Name:mk712863958f7dbf2601dd82dc9ca7bea42ef42f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0407 13:50:49.853195 1229086 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.32.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I0407 13:50:49.853204 1229086 start.go:235] Will wait 15m0s for node &{Name: IP:192.168.61.247 Port:8443 KubernetesVersion:v1.32.2 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0407 13:50:49.853362 1229086 addons.go:511] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I0407 13:50:49.853468 1229086 addons.go:69] Setting storage-provisioner=true in profile "flannel-056871"
	I0407 13:50:49.853489 1229086 addons.go:238] Setting addon storage-provisioner=true in "flannel-056871"
	I0407 13:50:49.853508 1229086 addons.go:69] Setting default-storageclass=true in profile "flannel-056871"
	I0407 13:50:49.853533 1229086 host.go:66] Checking if "flannel-056871" exists ...
	I0407 13:50:49.853564 1229086 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "flannel-056871"
	I0407 13:50:49.853590 1229086 config.go:182] Loaded profile config "flannel-056871": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.32.2
	I0407 13:50:49.854106 1229086 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0407 13:50:49.854150 1229086 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0407 13:50:49.854146 1229086 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0407 13:50:49.854285 1229086 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0407 13:50:49.854855 1229086 out.go:177] * Verifying Kubernetes components...
	I0407 13:50:49.856724 1229086 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0407 13:50:49.875017 1229086 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39479
	I0407 13:50:49.875166 1229086 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42643
	I0407 13:50:49.875729 1229086 main.go:141] libmachine: () Calling .GetVersion
	I0407 13:50:49.875743 1229086 main.go:141] libmachine: () Calling .GetVersion
	I0407 13:50:49.876373 1229086 main.go:141] libmachine: Using API Version  1
	I0407 13:50:49.876375 1229086 main.go:141] libmachine: Using API Version  1
	I0407 13:50:49.876409 1229086 main.go:141] libmachine: () Calling .SetConfigRaw
	I0407 13:50:49.876425 1229086 main.go:141] libmachine: () Calling .SetConfigRaw
	I0407 13:50:49.876939 1229086 main.go:141] libmachine: () Calling .GetMachineName
	I0407 13:50:49.877001 1229086 main.go:141] libmachine: () Calling .GetMachineName
	I0407 13:50:49.877237 1229086 main.go:141] libmachine: (flannel-056871) Calling .GetState
	I0407 13:50:49.877651 1229086 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0407 13:50:49.877698 1229086 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0407 13:50:49.882004 1229086 addons.go:238] Setting addon default-storageclass=true in "flannel-056871"
	I0407 13:50:49.882055 1229086 host.go:66] Checking if "flannel-056871" exists ...
	I0407 13:50:49.882496 1229086 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0407 13:50:49.882542 1229086 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0407 13:50:49.905456 1229086 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34887
	I0407 13:50:49.906078 1229086 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35353
	I0407 13:50:49.906328 1229086 main.go:141] libmachine: () Calling .GetVersion
	I0407 13:50:49.906588 1229086 main.go:141] libmachine: () Calling .GetVersion
	I0407 13:50:49.906958 1229086 main.go:141] libmachine: Using API Version  1
	I0407 13:50:49.906989 1229086 main.go:141] libmachine: () Calling .SetConfigRaw
	I0407 13:50:49.907436 1229086 main.go:141] libmachine: Using API Version  1
	I0407 13:50:49.907453 1229086 main.go:141] libmachine: () Calling .SetConfigRaw
	I0407 13:50:49.907514 1229086 main.go:141] libmachine: () Calling .GetMachineName
	I0407 13:50:49.907756 1229086 main.go:141] libmachine: (flannel-056871) Calling .GetState
	I0407 13:50:49.907844 1229086 main.go:141] libmachine: () Calling .GetMachineName
	I0407 13:50:49.908507 1229086 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0407 13:50:49.908572 1229086 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0407 13:50:49.909997 1229086 main.go:141] libmachine: (flannel-056871) Calling .DriverName
	I0407 13:50:49.912348 1229086 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0407 13:50:49.915139 1229086 addons.go:435] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0407 13:50:49.915178 1229086 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0407 13:50:49.915221 1229086 main.go:141] libmachine: (flannel-056871) Calling .GetSSHHostname
	I0407 13:50:49.920532 1229086 main.go:141] libmachine: (flannel-056871) DBG | domain flannel-056871 has defined MAC address 52:54:00:b2:bb:50 in network mk-flannel-056871
	I0407 13:50:49.920862 1229086 main.go:141] libmachine: (flannel-056871) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b2:bb:50", ip: ""} in network mk-flannel-056871: {Iface:virbr3 ExpiryTime:2025-04-07 14:50:16 +0000 UTC Type:0 Mac:52:54:00:b2:bb:50 Iaid: IPaddr:192.168.61.247 Prefix:24 Hostname:flannel-056871 Clientid:01:52:54:00:b2:bb:50}
	I0407 13:50:49.920884 1229086 main.go:141] libmachine: (flannel-056871) DBG | domain flannel-056871 has defined IP address 192.168.61.247 and MAC address 52:54:00:b2:bb:50 in network mk-flannel-056871
	I0407 13:50:49.921348 1229086 main.go:141] libmachine: (flannel-056871) Calling .GetSSHPort
	I0407 13:50:49.921619 1229086 main.go:141] libmachine: (flannel-056871) Calling .GetSSHKeyPath
	I0407 13:50:49.921850 1229086 main.go:141] libmachine: (flannel-056871) Calling .GetSSHUsername
	I0407 13:50:49.922135 1229086 sshutil.go:53] new ssh client: &{IP:192.168.61.247 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20602-1162386/.minikube/machines/flannel-056871/id_rsa Username:docker}
	I0407 13:50:49.928730 1229086 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42571
	I0407 13:50:49.929289 1229086 main.go:141] libmachine: () Calling .GetVersion
	I0407 13:50:49.929795 1229086 main.go:141] libmachine: Using API Version  1
	I0407 13:50:49.929826 1229086 main.go:141] libmachine: () Calling .SetConfigRaw
	I0407 13:50:49.930205 1229086 main.go:141] libmachine: () Calling .GetMachineName
	I0407 13:50:49.930435 1229086 main.go:141] libmachine: (flannel-056871) Calling .GetState
	I0407 13:50:49.932734 1229086 main.go:141] libmachine: (flannel-056871) Calling .DriverName
	I0407 13:50:49.933052 1229086 addons.go:435] installing /etc/kubernetes/addons/storageclass.yaml
	I0407 13:50:49.933074 1229086 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0407 13:50:49.933099 1229086 main.go:141] libmachine: (flannel-056871) Calling .GetSSHHostname
	I0407 13:50:49.936791 1229086 main.go:141] libmachine: (flannel-056871) DBG | domain flannel-056871 has defined MAC address 52:54:00:b2:bb:50 in network mk-flannel-056871
	I0407 13:50:49.937295 1229086 main.go:141] libmachine: (flannel-056871) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b2:bb:50", ip: ""} in network mk-flannel-056871: {Iface:virbr3 ExpiryTime:2025-04-07 14:50:16 +0000 UTC Type:0 Mac:52:54:00:b2:bb:50 Iaid: IPaddr:192.168.61.247 Prefix:24 Hostname:flannel-056871 Clientid:01:52:54:00:b2:bb:50}
	I0407 13:50:49.937324 1229086 main.go:141] libmachine: (flannel-056871) DBG | domain flannel-056871 has defined IP address 192.168.61.247 and MAC address 52:54:00:b2:bb:50 in network mk-flannel-056871
	I0407 13:50:49.937512 1229086 main.go:141] libmachine: (flannel-056871) Calling .GetSSHPort
	I0407 13:50:49.937814 1229086 main.go:141] libmachine: (flannel-056871) Calling .GetSSHKeyPath
	I0407 13:50:49.938069 1229086 main.go:141] libmachine: (flannel-056871) Calling .GetSSHUsername
	I0407 13:50:49.938248 1229086 sshutil.go:53] new ssh client: &{IP:192.168.61.247 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20602-1162386/.minikube/machines/flannel-056871/id_rsa Username:docker}
	I0407 13:50:50.214467 1229086 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0407 13:50:50.214570 1229086 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.32.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.61.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.32.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I0407 13:50:50.253276 1229086 node_ready.go:35] waiting up to 15m0s for node "flannel-056871" to be "Ready" ...
	I0407 13:50:50.335699 1229086 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.2/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0407 13:50:50.467947 1229086 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.2/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0407 13:50:50.731547 1229086 start.go:971] {"host.minikube.internal": 192.168.61.1} host record injected into CoreDNS's ConfigMap
	I0407 13:50:50.733104 1229086 main.go:141] libmachine: Making call to close driver server
	I0407 13:50:50.733137 1229086 main.go:141] libmachine: (flannel-056871) Calling .Close
	I0407 13:50:50.733577 1229086 main.go:141] libmachine: (flannel-056871) DBG | Closing plugin on server side
	I0407 13:50:50.733629 1229086 main.go:141] libmachine: Successfully made call to close driver server
	I0407 13:50:50.733642 1229086 main.go:141] libmachine: Making call to close connection to plugin binary
	I0407 13:50:50.733652 1229086 main.go:141] libmachine: Making call to close driver server
	I0407 13:50:50.733664 1229086 main.go:141] libmachine: (flannel-056871) Calling .Close
	I0407 13:50:50.733988 1229086 main.go:141] libmachine: Successfully made call to close driver server
	I0407 13:50:50.734009 1229086 main.go:141] libmachine: Making call to close connection to plugin binary
	I0407 13:50:50.734028 1229086 main.go:141] libmachine: (flannel-056871) DBG | Closing plugin on server side
	I0407 13:50:50.741077 1229086 main.go:141] libmachine: Making call to close driver server
	I0407 13:50:50.741122 1229086 main.go:141] libmachine: (flannel-056871) Calling .Close
	I0407 13:50:50.741511 1229086 main.go:141] libmachine: Successfully made call to close driver server
	I0407 13:50:50.741530 1229086 main.go:141] libmachine: Making call to close connection to plugin binary
	I0407 13:50:50.741565 1229086 main.go:141] libmachine: (flannel-056871) DBG | Closing plugin on server side
	I0407 13:50:51.185842 1229086 main.go:141] libmachine: Making call to close driver server
	I0407 13:50:51.185879 1229086 main.go:141] libmachine: (flannel-056871) Calling .Close
	I0407 13:50:51.186216 1229086 main.go:141] libmachine: Successfully made call to close driver server
	I0407 13:50:51.186239 1229086 main.go:141] libmachine: Making call to close connection to plugin binary
	I0407 13:50:51.186248 1229086 main.go:141] libmachine: Making call to close driver server
	I0407 13:50:51.186255 1229086 main.go:141] libmachine: (flannel-056871) Calling .Close
	I0407 13:50:51.186632 1229086 main.go:141] libmachine: (flannel-056871) DBG | Closing plugin on server side
	I0407 13:50:51.186634 1229086 main.go:141] libmachine: Successfully made call to close driver server
	I0407 13:50:51.186660 1229086 main.go:141] libmachine: Making call to close connection to plugin binary
	I0407 13:50:51.188142 1229086 out.go:177] * Enabled addons: default-storageclass, storage-provisioner
	I0407 13:50:49.733551 1220973 pod_ready.go:103] pod "metrics-server-f79f97bbb-m78vh" in "kube-system" namespace has status "Ready":"False"
	I0407 13:50:52.231923 1220973 pod_ready.go:103] pod "metrics-server-f79f97bbb-m78vh" in "kube-system" namespace has status "Ready":"False"
	I0407 13:50:49.607206 1230577 main.go:141] libmachine: (bridge-056871) DBG | domain bridge-056871 has defined MAC address 52:54:00:d9:a5:38 in network mk-bridge-056871
	I0407 13:50:49.608068 1230577 main.go:141] libmachine: (bridge-056871) DBG | domain bridge-056871 has current primary IP address 192.168.50.60 and MAC address 52:54:00:d9:a5:38 in network mk-bridge-056871
	I0407 13:50:49.608100 1230577 main.go:141] libmachine: (bridge-056871) found domain IP: 192.168.50.60
	I0407 13:50:49.608113 1230577 main.go:141] libmachine: (bridge-056871) reserving static IP address...
	I0407 13:50:49.608627 1230577 main.go:141] libmachine: (bridge-056871) DBG | unable to find host DHCP lease matching {name: "bridge-056871", mac: "52:54:00:d9:a5:38", ip: "192.168.50.60"} in network mk-bridge-056871
	I0407 13:50:49.733945 1230577 main.go:141] libmachine: (bridge-056871) DBG | Getting to WaitForSSH function...
	I0407 13:50:49.733980 1230577 main.go:141] libmachine: (bridge-056871) reserved static IP address 192.168.50.60 for domain bridge-056871
	I0407 13:50:49.734003 1230577 main.go:141] libmachine: (bridge-056871) waiting for SSH...
	I0407 13:50:49.737179 1230577 main.go:141] libmachine: (bridge-056871) DBG | domain bridge-056871 has defined MAC address 52:54:00:d9:a5:38 in network mk-bridge-056871
	I0407 13:50:49.737672 1230577 main.go:141] libmachine: (bridge-056871) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d9:a5:38", ip: ""} in network mk-bridge-056871: {Iface:virbr2 ExpiryTime:2025-04-07 14:50:42 +0000 UTC Type:0 Mac:52:54:00:d9:a5:38 Iaid: IPaddr:192.168.50.60 Prefix:24 Hostname:minikube Clientid:01:52:54:00:d9:a5:38}
	I0407 13:50:49.737721 1230577 main.go:141] libmachine: (bridge-056871) DBG | domain bridge-056871 has defined IP address 192.168.50.60 and MAC address 52:54:00:d9:a5:38 in network mk-bridge-056871
	I0407 13:50:49.737907 1230577 main.go:141] libmachine: (bridge-056871) DBG | Using SSH client type: external
	I0407 13:50:49.737938 1230577 main.go:141] libmachine: (bridge-056871) DBG | Using SSH private key: /home/jenkins/minikube-integration/20602-1162386/.minikube/machines/bridge-056871/id_rsa (-rw-------)
	I0407 13:50:49.737990 1230577 main.go:141] libmachine: (bridge-056871) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.50.60 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/20602-1162386/.minikube/machines/bridge-056871/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0407 13:50:49.738003 1230577 main.go:141] libmachine: (bridge-056871) DBG | About to run SSH command:
	I0407 13:50:49.738018 1230577 main.go:141] libmachine: (bridge-056871) DBG | exit 0
	I0407 13:50:49.871623 1230577 main.go:141] libmachine: (bridge-056871) DBG | SSH cmd err, output: <nil>: 
	I0407 13:50:49.872173 1230577 main.go:141] libmachine: (bridge-056871) KVM machine creation complete
	I0407 13:50:49.872561 1230577 main.go:141] libmachine: (bridge-056871) Calling .GetConfigRaw
	I0407 13:50:49.873430 1230577 main.go:141] libmachine: (bridge-056871) Calling .DriverName
	I0407 13:50:49.873992 1230577 main.go:141] libmachine: (bridge-056871) Calling .DriverName
	I0407 13:50:49.874580 1230577 main.go:141] libmachine: Waiting for machine to be running, this may take a few minutes...
	I0407 13:50:49.874607 1230577 main.go:141] libmachine: (bridge-056871) Calling .GetState
	I0407 13:50:49.876968 1230577 main.go:141] libmachine: Detecting operating system of created instance...
	I0407 13:50:49.876990 1230577 main.go:141] libmachine: Waiting for SSH to be available...
	I0407 13:50:49.876998 1230577 main.go:141] libmachine: Getting to WaitForSSH function...
	I0407 13:50:49.877008 1230577 main.go:141] libmachine: (bridge-056871) Calling .GetSSHHostname
	I0407 13:50:49.881356 1230577 main.go:141] libmachine: (bridge-056871) DBG | domain bridge-056871 has defined MAC address 52:54:00:d9:a5:38 in network mk-bridge-056871
	I0407 13:50:49.881976 1230577 main.go:141] libmachine: (bridge-056871) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d9:a5:38", ip: ""} in network mk-bridge-056871: {Iface:virbr2 ExpiryTime:2025-04-07 14:50:42 +0000 UTC Type:0 Mac:52:54:00:d9:a5:38 Iaid: IPaddr:192.168.50.60 Prefix:24 Hostname:bridge-056871 Clientid:01:52:54:00:d9:a5:38}
	I0407 13:50:49.882011 1230577 main.go:141] libmachine: (bridge-056871) DBG | domain bridge-056871 has defined IP address 192.168.50.60 and MAC address 52:54:00:d9:a5:38 in network mk-bridge-056871
	I0407 13:50:49.882276 1230577 main.go:141] libmachine: (bridge-056871) Calling .GetSSHPort
	I0407 13:50:49.882470 1230577 main.go:141] libmachine: (bridge-056871) Calling .GetSSHKeyPath
	I0407 13:50:49.882640 1230577 main.go:141] libmachine: (bridge-056871) Calling .GetSSHKeyPath
	I0407 13:50:49.882737 1230577 main.go:141] libmachine: (bridge-056871) Calling .GetSSHUsername
	I0407 13:50:49.882895 1230577 main.go:141] libmachine: Using SSH client type: native
	I0407 13:50:49.883159 1230577 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x836360] 0x839060 <nil>  [] 0s} 192.168.50.60 22 <nil> <nil>}
	I0407 13:50:49.883175 1230577 main.go:141] libmachine: About to run SSH command:
	exit 0
	I0407 13:50:50.013571 1230577 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0407 13:50:50.013597 1230577 main.go:141] libmachine: Detecting the provisioner...
	I0407 13:50:50.013606 1230577 main.go:141] libmachine: (bridge-056871) Calling .GetSSHHostname
	I0407 13:50:50.017589 1230577 main.go:141] libmachine: (bridge-056871) DBG | domain bridge-056871 has defined MAC address 52:54:00:d9:a5:38 in network mk-bridge-056871
	I0407 13:50:50.018237 1230577 main.go:141] libmachine: (bridge-056871) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d9:a5:38", ip: ""} in network mk-bridge-056871: {Iface:virbr2 ExpiryTime:2025-04-07 14:50:42 +0000 UTC Type:0 Mac:52:54:00:d9:a5:38 Iaid: IPaddr:192.168.50.60 Prefix:24 Hostname:bridge-056871 Clientid:01:52:54:00:d9:a5:38}
	I0407 13:50:50.018288 1230577 main.go:141] libmachine: (bridge-056871) DBG | domain bridge-056871 has defined IP address 192.168.50.60 and MAC address 52:54:00:d9:a5:38 in network mk-bridge-056871
	I0407 13:50:50.018490 1230577 main.go:141] libmachine: (bridge-056871) Calling .GetSSHPort
	I0407 13:50:50.018885 1230577 main.go:141] libmachine: (bridge-056871) Calling .GetSSHKeyPath
	I0407 13:50:50.019181 1230577 main.go:141] libmachine: (bridge-056871) Calling .GetSSHKeyPath
	I0407 13:50:50.019392 1230577 main.go:141] libmachine: (bridge-056871) Calling .GetSSHUsername
	I0407 13:50:50.019621 1230577 main.go:141] libmachine: Using SSH client type: native
	I0407 13:50:50.020026 1230577 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x836360] 0x839060 <nil>  [] 0s} 192.168.50.60 22 <nil> <nil>}
	I0407 13:50:50.020047 1230577 main.go:141] libmachine: About to run SSH command:
	cat /etc/os-release
	I0407 13:50:50.148346 1230577 main.go:141] libmachine: SSH cmd err, output: <nil>: NAME=Buildroot
	VERSION=2023.02.9-dirty
	ID=buildroot
	VERSION_ID=2023.02.9
	PRETTY_NAME="Buildroot 2023.02.9"
	
	I0407 13:50:50.148523 1230577 main.go:141] libmachine: found compatible host: buildroot
	I0407 13:50:50.148541 1230577 main.go:141] libmachine: Provisioning with buildroot...
	I0407 13:50:50.148552 1230577 main.go:141] libmachine: (bridge-056871) Calling .GetMachineName
	I0407 13:50:50.148891 1230577 buildroot.go:166] provisioning hostname "bridge-056871"
	I0407 13:50:50.148924 1230577 main.go:141] libmachine: (bridge-056871) Calling .GetMachineName
	I0407 13:50:50.149180 1230577 main.go:141] libmachine: (bridge-056871) Calling .GetSSHHostname
	I0407 13:50:50.153622 1230577 main.go:141] libmachine: (bridge-056871) DBG | domain bridge-056871 has defined MAC address 52:54:00:d9:a5:38 in network mk-bridge-056871
	I0407 13:50:50.154168 1230577 main.go:141] libmachine: (bridge-056871) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d9:a5:38", ip: ""} in network mk-bridge-056871: {Iface:virbr2 ExpiryTime:2025-04-07 14:50:42 +0000 UTC Type:0 Mac:52:54:00:d9:a5:38 Iaid: IPaddr:192.168.50.60 Prefix:24 Hostname:bridge-056871 Clientid:01:52:54:00:d9:a5:38}
	I0407 13:50:50.154202 1230577 main.go:141] libmachine: (bridge-056871) DBG | domain bridge-056871 has defined IP address 192.168.50.60 and MAC address 52:54:00:d9:a5:38 in network mk-bridge-056871
	I0407 13:50:50.154423 1230577 main.go:141] libmachine: (bridge-056871) Calling .GetSSHPort
	I0407 13:50:50.154840 1230577 main.go:141] libmachine: (bridge-056871) Calling .GetSSHKeyPath
	I0407 13:50:50.155099 1230577 main.go:141] libmachine: (bridge-056871) Calling .GetSSHKeyPath
	I0407 13:50:50.155343 1230577 main.go:141] libmachine: (bridge-056871) Calling .GetSSHUsername
	I0407 13:50:50.155598 1230577 main.go:141] libmachine: Using SSH client type: native
	I0407 13:50:50.155917 1230577 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x836360] 0x839060 <nil>  [] 0s} 192.168.50.60 22 <nil> <nil>}
	I0407 13:50:50.155939 1230577 main.go:141] libmachine: About to run SSH command:
	sudo hostname bridge-056871 && echo "bridge-056871" | sudo tee /etc/hostname
	I0407 13:50:50.307962 1230577 main.go:141] libmachine: SSH cmd err, output: <nil>: bridge-056871
	
	I0407 13:50:50.308108 1230577 main.go:141] libmachine: (bridge-056871) Calling .GetSSHHostname
	I0407 13:50:50.312570 1230577 main.go:141] libmachine: (bridge-056871) DBG | domain bridge-056871 has defined MAC address 52:54:00:d9:a5:38 in network mk-bridge-056871
	I0407 13:50:50.313202 1230577 main.go:141] libmachine: (bridge-056871) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d9:a5:38", ip: ""} in network mk-bridge-056871: {Iface:virbr2 ExpiryTime:2025-04-07 14:50:42 +0000 UTC Type:0 Mac:52:54:00:d9:a5:38 Iaid: IPaddr:192.168.50.60 Prefix:24 Hostname:bridge-056871 Clientid:01:52:54:00:d9:a5:38}
	I0407 13:50:50.313255 1230577 main.go:141] libmachine: (bridge-056871) DBG | domain bridge-056871 has defined IP address 192.168.50.60 and MAC address 52:54:00:d9:a5:38 in network mk-bridge-056871
	I0407 13:50:50.313769 1230577 main.go:141] libmachine: (bridge-056871) Calling .GetSSHPort
	I0407 13:50:50.314025 1230577 main.go:141] libmachine: (bridge-056871) Calling .GetSSHKeyPath
	I0407 13:50:50.314284 1230577 main.go:141] libmachine: (bridge-056871) Calling .GetSSHKeyPath
	I0407 13:50:50.314527 1230577 main.go:141] libmachine: (bridge-056871) Calling .GetSSHUsername
	I0407 13:50:50.314847 1230577 main.go:141] libmachine: Using SSH client type: native
	I0407 13:50:50.315206 1230577 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x836360] 0x839060 <nil>  [] 0s} 192.168.50.60 22 <nil> <nil>}
	I0407 13:50:50.315237 1230577 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sbridge-056871' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 bridge-056871/g' /etc/hosts;
				else 
					echo '127.0.1.1 bridge-056871' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0407 13:50:50.449379 1230577 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0407 13:50:50.449450 1230577 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/20602-1162386/.minikube CaCertPath:/home/jenkins/minikube-integration/20602-1162386/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/20602-1162386/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/20602-1162386/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/20602-1162386/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/20602-1162386/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/20602-1162386/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/20602-1162386/.minikube}
	I0407 13:50:50.449489 1230577 buildroot.go:174] setting up certificates
	I0407 13:50:50.449508 1230577 provision.go:84] configureAuth start
	I0407 13:50:50.449524 1230577 main.go:141] libmachine: (bridge-056871) Calling .GetMachineName
	I0407 13:50:50.450144 1230577 main.go:141] libmachine: (bridge-056871) Calling .GetIP
	I0407 13:50:50.455450 1230577 main.go:141] libmachine: (bridge-056871) DBG | domain bridge-056871 has defined MAC address 52:54:00:d9:a5:38 in network mk-bridge-056871
	I0407 13:50:50.456299 1230577 main.go:141] libmachine: (bridge-056871) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d9:a5:38", ip: ""} in network mk-bridge-056871: {Iface:virbr2 ExpiryTime:2025-04-07 14:50:42 +0000 UTC Type:0 Mac:52:54:00:d9:a5:38 Iaid: IPaddr:192.168.50.60 Prefix:24 Hostname:bridge-056871 Clientid:01:52:54:00:d9:a5:38}
	I0407 13:50:50.456349 1230577 main.go:141] libmachine: (bridge-056871) DBG | domain bridge-056871 has defined IP address 192.168.50.60 and MAC address 52:54:00:d9:a5:38 in network mk-bridge-056871
	I0407 13:50:50.456694 1230577 main.go:141] libmachine: (bridge-056871) Calling .GetSSHHostname
	I0407 13:50:50.460806 1230577 main.go:141] libmachine: (bridge-056871) DBG | domain bridge-056871 has defined MAC address 52:54:00:d9:a5:38 in network mk-bridge-056871
	I0407 13:50:50.461439 1230577 main.go:141] libmachine: (bridge-056871) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d9:a5:38", ip: ""} in network mk-bridge-056871: {Iface:virbr2 ExpiryTime:2025-04-07 14:50:42 +0000 UTC Type:0 Mac:52:54:00:d9:a5:38 Iaid: IPaddr:192.168.50.60 Prefix:24 Hostname:bridge-056871 Clientid:01:52:54:00:d9:a5:38}
	I0407 13:50:50.461465 1230577 main.go:141] libmachine: (bridge-056871) DBG | domain bridge-056871 has defined IP address 192.168.50.60 and MAC address 52:54:00:d9:a5:38 in network mk-bridge-056871
	I0407 13:50:50.461845 1230577 provision.go:143] copyHostCerts
	I0407 13:50:50.461922 1230577 exec_runner.go:144] found /home/jenkins/minikube-integration/20602-1162386/.minikube/cert.pem, removing ...
	I0407 13:50:50.461946 1230577 exec_runner.go:203] rm: /home/jenkins/minikube-integration/20602-1162386/.minikube/cert.pem
	I0407 13:50:50.462008 1230577 exec_runner.go:151] cp: /home/jenkins/minikube-integration/20602-1162386/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/20602-1162386/.minikube/cert.pem (1123 bytes)
	I0407 13:50:50.462133 1230577 exec_runner.go:144] found /home/jenkins/minikube-integration/20602-1162386/.minikube/key.pem, removing ...
	I0407 13:50:50.462146 1230577 exec_runner.go:203] rm: /home/jenkins/minikube-integration/20602-1162386/.minikube/key.pem
	I0407 13:50:50.462169 1230577 exec_runner.go:151] cp: /home/jenkins/minikube-integration/20602-1162386/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/20602-1162386/.minikube/key.pem (1675 bytes)
	I0407 13:50:50.462266 1230577 exec_runner.go:144] found /home/jenkins/minikube-integration/20602-1162386/.minikube/ca.pem, removing ...
	I0407 13:50:50.462279 1230577 exec_runner.go:203] rm: /home/jenkins/minikube-integration/20602-1162386/.minikube/ca.pem
	I0407 13:50:50.462310 1230577 exec_runner.go:151] cp: /home/jenkins/minikube-integration/20602-1162386/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/20602-1162386/.minikube/ca.pem (1078 bytes)
	I0407 13:50:50.462399 1230577 provision.go:117] generating server cert: /home/jenkins/minikube-integration/20602-1162386/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/20602-1162386/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/20602-1162386/.minikube/certs/ca-key.pem org=jenkins.bridge-056871 san=[127.0.0.1 192.168.50.60 bridge-056871 localhost minikube]
	I0407 13:50:50.593520 1230577 provision.go:177] copyRemoteCerts
	I0407 13:50:50.593592 1230577 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0407 13:50:50.593620 1230577 main.go:141] libmachine: (bridge-056871) Calling .GetSSHHostname
	I0407 13:50:50.597459 1230577 main.go:141] libmachine: (bridge-056871) DBG | domain bridge-056871 has defined MAC address 52:54:00:d9:a5:38 in network mk-bridge-056871
	I0407 13:50:50.598006 1230577 main.go:141] libmachine: (bridge-056871) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d9:a5:38", ip: ""} in network mk-bridge-056871: {Iface:virbr2 ExpiryTime:2025-04-07 14:50:42 +0000 UTC Type:0 Mac:52:54:00:d9:a5:38 Iaid: IPaddr:192.168.50.60 Prefix:24 Hostname:bridge-056871 Clientid:01:52:54:00:d9:a5:38}
	I0407 13:50:50.598046 1230577 main.go:141] libmachine: (bridge-056871) DBG | domain bridge-056871 has defined IP address 192.168.50.60 and MAC address 52:54:00:d9:a5:38 in network mk-bridge-056871
	I0407 13:50:50.598295 1230577 main.go:141] libmachine: (bridge-056871) Calling .GetSSHPort
	I0407 13:50:50.598543 1230577 main.go:141] libmachine: (bridge-056871) Calling .GetSSHKeyPath
	I0407 13:50:50.598774 1230577 main.go:141] libmachine: (bridge-056871) Calling .GetSSHUsername
	I0407 13:50:50.598944 1230577 sshutil.go:53] new ssh client: &{IP:192.168.50.60 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20602-1162386/.minikube/machines/bridge-056871/id_rsa Username:docker}
	I0407 13:50:50.689418 1230577 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20602-1162386/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0407 13:50:50.722043 1230577 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20602-1162386/.minikube/machines/server.pem --> /etc/docker/server.pem (1208 bytes)
	I0407 13:50:50.757374 1230577 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20602-1162386/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0407 13:50:50.789497 1230577 provision.go:87] duration metric: took 339.97034ms to configureAuth
	I0407 13:50:50.789540 1230577 buildroot.go:189] setting minikube options for container-runtime
	I0407 13:50:50.789789 1230577 config.go:182] Loaded profile config "bridge-056871": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.32.2
	I0407 13:50:50.789890 1230577 main.go:141] libmachine: (bridge-056871) Calling .GetSSHHostname
	I0407 13:50:50.793663 1230577 main.go:141] libmachine: (bridge-056871) DBG | domain bridge-056871 has defined MAC address 52:54:00:d9:a5:38 in network mk-bridge-056871
	I0407 13:50:50.794168 1230577 main.go:141] libmachine: (bridge-056871) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d9:a5:38", ip: ""} in network mk-bridge-056871: {Iface:virbr2 ExpiryTime:2025-04-07 14:50:42 +0000 UTC Type:0 Mac:52:54:00:d9:a5:38 Iaid: IPaddr:192.168.50.60 Prefix:24 Hostname:bridge-056871 Clientid:01:52:54:00:d9:a5:38}
	I0407 13:50:50.794207 1230577 main.go:141] libmachine: (bridge-056871) DBG | domain bridge-056871 has defined IP address 192.168.50.60 and MAC address 52:54:00:d9:a5:38 in network mk-bridge-056871
	I0407 13:50:50.794531 1230577 main.go:141] libmachine: (bridge-056871) Calling .GetSSHPort
	I0407 13:50:50.794759 1230577 main.go:141] libmachine: (bridge-056871) Calling .GetSSHKeyPath
	I0407 13:50:50.794949 1230577 main.go:141] libmachine: (bridge-056871) Calling .GetSSHKeyPath
	I0407 13:50:50.795114 1230577 main.go:141] libmachine: (bridge-056871) Calling .GetSSHUsername
	I0407 13:50:50.795319 1230577 main.go:141] libmachine: Using SSH client type: native
	I0407 13:50:50.795557 1230577 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x836360] 0x839060 <nil>  [] 0s} 192.168.50.60 22 <nil> <nil>}
	I0407 13:50:50.795574 1230577 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0407 13:50:51.058392 1230577 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0407 13:50:51.058431 1230577 main.go:141] libmachine: Checking connection to Docker...
	I0407 13:50:51.058443 1230577 main.go:141] libmachine: (bridge-056871) Calling .GetURL
	I0407 13:50:51.060057 1230577 main.go:141] libmachine: (bridge-056871) DBG | using libvirt version 6000000
	I0407 13:50:51.063055 1230577 main.go:141] libmachine: (bridge-056871) DBG | domain bridge-056871 has defined MAC address 52:54:00:d9:a5:38 in network mk-bridge-056871
	I0407 13:50:51.063423 1230577 main.go:141] libmachine: (bridge-056871) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d9:a5:38", ip: ""} in network mk-bridge-056871: {Iface:virbr2 ExpiryTime:2025-04-07 14:50:42 +0000 UTC Type:0 Mac:52:54:00:d9:a5:38 Iaid: IPaddr:192.168.50.60 Prefix:24 Hostname:bridge-056871 Clientid:01:52:54:00:d9:a5:38}
	I0407 13:50:51.063463 1230577 main.go:141] libmachine: (bridge-056871) DBG | domain bridge-056871 has defined IP address 192.168.50.60 and MAC address 52:54:00:d9:a5:38 in network mk-bridge-056871
	I0407 13:50:51.063623 1230577 main.go:141] libmachine: Docker is up and running!
	I0407 13:50:51.063643 1230577 main.go:141] libmachine: Reticulating splines...
	I0407 13:50:51.063652 1230577 client.go:171] duration metric: took 24.415097468s to LocalClient.Create
	I0407 13:50:51.063680 1230577 start.go:167] duration metric: took 24.415165779s to libmachine.API.Create "bridge-056871"
	I0407 13:50:51.063694 1230577 start.go:293] postStartSetup for "bridge-056871" (driver="kvm2")
	I0407 13:50:51.063705 1230577 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0407 13:50:51.063725 1230577 main.go:141] libmachine: (bridge-056871) Calling .DriverName
	I0407 13:50:51.064040 1230577 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0407 13:50:51.064068 1230577 main.go:141] libmachine: (bridge-056871) Calling .GetSSHHostname
	I0407 13:50:51.066899 1230577 main.go:141] libmachine: (bridge-056871) DBG | domain bridge-056871 has defined MAC address 52:54:00:d9:a5:38 in network mk-bridge-056871
	I0407 13:50:51.067209 1230577 main.go:141] libmachine: (bridge-056871) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d9:a5:38", ip: ""} in network mk-bridge-056871: {Iface:virbr2 ExpiryTime:2025-04-07 14:50:42 +0000 UTC Type:0 Mac:52:54:00:d9:a5:38 Iaid: IPaddr:192.168.50.60 Prefix:24 Hostname:bridge-056871 Clientid:01:52:54:00:d9:a5:38}
	I0407 13:50:51.067244 1230577 main.go:141] libmachine: (bridge-056871) DBG | domain bridge-056871 has defined IP address 192.168.50.60 and MAC address 52:54:00:d9:a5:38 in network mk-bridge-056871
	I0407 13:50:51.067387 1230577 main.go:141] libmachine: (bridge-056871) Calling .GetSSHPort
	I0407 13:50:51.067586 1230577 main.go:141] libmachine: (bridge-056871) Calling .GetSSHKeyPath
	I0407 13:50:51.067750 1230577 main.go:141] libmachine: (bridge-056871) Calling .GetSSHUsername
	I0407 13:50:51.067884 1230577 sshutil.go:53] new ssh client: &{IP:192.168.50.60 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20602-1162386/.minikube/machines/bridge-056871/id_rsa Username:docker}
	I0407 13:50:51.163321 1230577 ssh_runner.go:195] Run: cat /etc/os-release
	I0407 13:50:51.168918 1230577 info.go:137] Remote host: Buildroot 2023.02.9
	I0407 13:50:51.168960 1230577 filesync.go:126] Scanning /home/jenkins/minikube-integration/20602-1162386/.minikube/addons for local assets ...
	I0407 13:50:51.169049 1230577 filesync.go:126] Scanning /home/jenkins/minikube-integration/20602-1162386/.minikube/files for local assets ...
	I0407 13:50:51.169147 1230577 filesync.go:149] local asset: /home/jenkins/minikube-integration/20602-1162386/.minikube/files/etc/ssl/certs/11697162.pem -> 11697162.pem in /etc/ssl/certs
	I0407 13:50:51.169246 1230577 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0407 13:50:51.182011 1230577 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20602-1162386/.minikube/files/etc/ssl/certs/11697162.pem --> /etc/ssl/certs/11697162.pem (1708 bytes)
	I0407 13:50:51.215173 1230577 start.go:296] duration metric: took 151.464024ms for postStartSetup
	I0407 13:50:51.215285 1230577 main.go:141] libmachine: (bridge-056871) Calling .GetConfigRaw
	I0407 13:50:51.216295 1230577 main.go:141] libmachine: (bridge-056871) Calling .GetIP
	I0407 13:50:51.219915 1230577 main.go:141] libmachine: (bridge-056871) DBG | domain bridge-056871 has defined MAC address 52:54:00:d9:a5:38 in network mk-bridge-056871
	I0407 13:50:51.220457 1230577 main.go:141] libmachine: (bridge-056871) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d9:a5:38", ip: ""} in network mk-bridge-056871: {Iface:virbr2 ExpiryTime:2025-04-07 14:50:42 +0000 UTC Type:0 Mac:52:54:00:d9:a5:38 Iaid: IPaddr:192.168.50.60 Prefix:24 Hostname:bridge-056871 Clientid:01:52:54:00:d9:a5:38}
	I0407 13:50:51.220492 1230577 main.go:141] libmachine: (bridge-056871) DBG | domain bridge-056871 has defined IP address 192.168.50.60 and MAC address 52:54:00:d9:a5:38 in network mk-bridge-056871
	I0407 13:50:51.220853 1230577 profile.go:143] Saving config to /home/jenkins/minikube-integration/20602-1162386/.minikube/profiles/bridge-056871/config.json ...
	I0407 13:50:51.221086 1230577 start.go:128] duration metric: took 24.597995408s to createHost
	I0407 13:50:51.221115 1230577 main.go:141] libmachine: (bridge-056871) Calling .GetSSHHostname
	I0407 13:50:51.224196 1230577 main.go:141] libmachine: (bridge-056871) DBG | domain bridge-056871 has defined MAC address 52:54:00:d9:a5:38 in network mk-bridge-056871
	I0407 13:50:51.224690 1230577 main.go:141] libmachine: (bridge-056871) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d9:a5:38", ip: ""} in network mk-bridge-056871: {Iface:virbr2 ExpiryTime:2025-04-07 14:50:42 +0000 UTC Type:0 Mac:52:54:00:d9:a5:38 Iaid: IPaddr:192.168.50.60 Prefix:24 Hostname:bridge-056871 Clientid:01:52:54:00:d9:a5:38}
	I0407 13:50:51.224724 1230577 main.go:141] libmachine: (bridge-056871) DBG | domain bridge-056871 has defined IP address 192.168.50.60 and MAC address 52:54:00:d9:a5:38 in network mk-bridge-056871
	I0407 13:50:51.224909 1230577 main.go:141] libmachine: (bridge-056871) Calling .GetSSHPort
	I0407 13:50:51.225153 1230577 main.go:141] libmachine: (bridge-056871) Calling .GetSSHKeyPath
	I0407 13:50:51.225359 1230577 main.go:141] libmachine: (bridge-056871) Calling .GetSSHKeyPath
	I0407 13:50:51.225593 1230577 main.go:141] libmachine: (bridge-056871) Calling .GetSSHUsername
	I0407 13:50:51.225819 1230577 main.go:141] libmachine: Using SSH client type: native
	I0407 13:50:51.226057 1230577 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x836360] 0x839060 <nil>  [] 0s} 192.168.50.60 22 <nil> <nil>}
	I0407 13:50:51.226070 1230577 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0407 13:50:51.355321 1230577 main.go:141] libmachine: SSH cmd err, output: <nil>: 1744033851.327638706
	
	I0407 13:50:51.355352 1230577 fix.go:216] guest clock: 1744033851.327638706
	I0407 13:50:51.355363 1230577 fix.go:229] Guest: 2025-04-07 13:50:51.327638706 +0000 UTC Remote: 2025-04-07 13:50:51.221101199 +0000 UTC m=+26.995589901 (delta=106.537507ms)
	I0407 13:50:51.355411 1230577 fix.go:200] guest clock delta is within tolerance: 106.537507ms
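	The delta reported above is simply the guest timestamp minus the host-side timestamp recorded just before the SSH call: 1744033851.327638706 − 1744033851.221101199 ≈ 0.106537507 s, i.e. the 106.537507ms shown, which is why it passes the clock-skew tolerance check.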
	I0407 13:50:51.355419 1230577 start.go:83] releasing machines lock for "bridge-056871", held for 24.732580363s
	I0407 13:50:51.355448 1230577 main.go:141] libmachine: (bridge-056871) Calling .DriverName
	I0407 13:50:51.355762 1230577 main.go:141] libmachine: (bridge-056871) Calling .GetIP
	I0407 13:50:51.358759 1230577 main.go:141] libmachine: (bridge-056871) DBG | domain bridge-056871 has defined MAC address 52:54:00:d9:a5:38 in network mk-bridge-056871
	I0407 13:50:51.359218 1230577 main.go:141] libmachine: (bridge-056871) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d9:a5:38", ip: ""} in network mk-bridge-056871: {Iface:virbr2 ExpiryTime:2025-04-07 14:50:42 +0000 UTC Type:0 Mac:52:54:00:d9:a5:38 Iaid: IPaddr:192.168.50.60 Prefix:24 Hostname:bridge-056871 Clientid:01:52:54:00:d9:a5:38}
	I0407 13:50:51.359247 1230577 main.go:141] libmachine: (bridge-056871) DBG | domain bridge-056871 has defined IP address 192.168.50.60 and MAC address 52:54:00:d9:a5:38 in network mk-bridge-056871
	I0407 13:50:51.359537 1230577 main.go:141] libmachine: (bridge-056871) Calling .DriverName
	I0407 13:50:51.360286 1230577 main.go:141] libmachine: (bridge-056871) Calling .DriverName
	I0407 13:50:51.360561 1230577 main.go:141] libmachine: (bridge-056871) Calling .DriverName
	I0407 13:50:51.360655 1230577 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0407 13:50:51.360707 1230577 main.go:141] libmachine: (bridge-056871) Calling .GetSSHHostname
	I0407 13:50:51.360847 1230577 ssh_runner.go:195] Run: cat /version.json
	I0407 13:50:51.360878 1230577 main.go:141] libmachine: (bridge-056871) Calling .GetSSHHostname
	I0407 13:50:51.363825 1230577 main.go:141] libmachine: (bridge-056871) DBG | domain bridge-056871 has defined MAC address 52:54:00:d9:a5:38 in network mk-bridge-056871
	I0407 13:50:51.364425 1230577 main.go:141] libmachine: (bridge-056871) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d9:a5:38", ip: ""} in network mk-bridge-056871: {Iface:virbr2 ExpiryTime:2025-04-07 14:50:42 +0000 UTC Type:0 Mac:52:54:00:d9:a5:38 Iaid: IPaddr:192.168.50.60 Prefix:24 Hostname:bridge-056871 Clientid:01:52:54:00:d9:a5:38}
	I0407 13:50:51.364462 1230577 main.go:141] libmachine: (bridge-056871) DBG | domain bridge-056871 has defined IP address 192.168.50.60 and MAC address 52:54:00:d9:a5:38 in network mk-bridge-056871
	I0407 13:50:51.364487 1230577 main.go:141] libmachine: (bridge-056871) DBG | domain bridge-056871 has defined MAC address 52:54:00:d9:a5:38 in network mk-bridge-056871
	I0407 13:50:51.364681 1230577 main.go:141] libmachine: (bridge-056871) Calling .GetSSHPort
	I0407 13:50:51.364990 1230577 main.go:141] libmachine: (bridge-056871) Calling .GetSSHKeyPath
	I0407 13:50:51.365112 1230577 main.go:141] libmachine: (bridge-056871) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d9:a5:38", ip: ""} in network mk-bridge-056871: {Iface:virbr2 ExpiryTime:2025-04-07 14:50:42 +0000 UTC Type:0 Mac:52:54:00:d9:a5:38 Iaid: IPaddr:192.168.50.60 Prefix:24 Hostname:bridge-056871 Clientid:01:52:54:00:d9:a5:38}
	I0407 13:50:51.365141 1230577 main.go:141] libmachine: (bridge-056871) DBG | domain bridge-056871 has defined IP address 192.168.50.60 and MAC address 52:54:00:d9:a5:38 in network mk-bridge-056871
	I0407 13:50:51.365196 1230577 main.go:141] libmachine: (bridge-056871) Calling .GetSSHUsername
	I0407 13:50:51.365461 1230577 sshutil.go:53] new ssh client: &{IP:192.168.50.60 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20602-1162386/.minikube/machines/bridge-056871/id_rsa Username:docker}
	I0407 13:50:51.365537 1230577 main.go:141] libmachine: (bridge-056871) Calling .GetSSHPort
	I0407 13:50:51.365764 1230577 main.go:141] libmachine: (bridge-056871) Calling .GetSSHKeyPath
	I0407 13:50:51.365992 1230577 main.go:141] libmachine: (bridge-056871) Calling .GetSSHUsername
	I0407 13:50:51.366203 1230577 sshutil.go:53] new ssh client: &{IP:192.168.50.60 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20602-1162386/.minikube/machines/bridge-056871/id_rsa Username:docker}
	I0407 13:50:51.470790 1230577 ssh_runner.go:195] Run: systemctl --version
	I0407 13:50:51.477081 1230577 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0407 13:50:51.646446 1230577 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0407 13:50:51.652665 1230577 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0407 13:50:51.652749 1230577 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0407 13:50:51.670656 1230577 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0407 13:50:51.670685 1230577 start.go:495] detecting cgroup driver to use...
	I0407 13:50:51.670770 1230577 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0407 13:50:51.690714 1230577 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0407 13:50:51.708136 1230577 docker.go:217] disabling cri-docker service (if available) ...
	I0407 13:50:51.708236 1230577 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0407 13:50:51.724771 1230577 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0407 13:50:51.742167 1230577 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0407 13:50:51.888167 1230577 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0407 13:50:52.046050 1230577 docker.go:233] disabling docker service ...
	I0407 13:50:52.046143 1230577 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0407 13:50:52.064251 1230577 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0407 13:50:52.080907 1230577 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0407 13:50:52.237674 1230577 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0407 13:50:52.367460 1230577 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0407 13:50:52.381926 1230577 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0407 13:50:52.401291 1230577 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10" pause image...
	I0407 13:50:52.401354 1230577 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0407 13:50:52.412491 1230577 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0407 13:50:52.412572 1230577 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0407 13:50:52.425226 1230577 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0407 13:50:52.436161 1230577 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0407 13:50:52.448684 1230577 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0407 13:50:52.461652 1230577 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0407 13:50:52.474488 1230577 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I0407 13:50:52.493283 1230577 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
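	The sed pipeline above rewrites minikube's CRI-O drop-in at /etc/crio/crio.conf.d/02-crio.conf. As an illustrative sketch only (not captured from this run; the section headers follow CRI-O's usual TOML layout), the keys those edits leave behind would be approximately:
	
	[crio.image]
	pause_image = "registry.k8s.io/pause:3.10"
	
	[crio.runtime]
	cgroup_manager = "cgroupfs"
	conmon_cgroup = "pod"
	default_sysctls = [
	  "net.ipv4.ip_unprivileged_port_start=0",
	]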
	I0407 13:50:52.504400 1230577 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0407 13:50:52.514824 1230577 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0407 13:50:52.514917 1230577 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0407 13:50:52.529376 1230577 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0407 13:50:52.541468 1230577 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0407 13:50:52.678297 1230577 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0407 13:50:52.792609 1230577 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I0407 13:50:52.792702 1230577 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0407 13:50:52.798134 1230577 start.go:563] Will wait 60s for crictl version
	I0407 13:50:52.798199 1230577 ssh_runner.go:195] Run: which crictl
	I0407 13:50:52.803082 1230577 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0407 13:50:52.859038 1230577 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0407 13:50:52.859147 1230577 ssh_runner.go:195] Run: crio --version
	I0407 13:50:52.891576 1230577 ssh_runner.go:195] Run: crio --version
	I0407 13:50:52.923542 1230577 out.go:177] * Preparing Kubernetes v1.32.2 on CRI-O 1.29.1 ...
	I0407 13:50:52.925081 1230577 main.go:141] libmachine: (bridge-056871) Calling .GetIP
	I0407 13:50:52.928505 1230577 main.go:141] libmachine: (bridge-056871) DBG | domain bridge-056871 has defined MAC address 52:54:00:d9:a5:38 in network mk-bridge-056871
	I0407 13:50:52.928967 1230577 main.go:141] libmachine: (bridge-056871) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d9:a5:38", ip: ""} in network mk-bridge-056871: {Iface:virbr2 ExpiryTime:2025-04-07 14:50:42 +0000 UTC Type:0 Mac:52:54:00:d9:a5:38 Iaid: IPaddr:192.168.50.60 Prefix:24 Hostname:bridge-056871 Clientid:01:52:54:00:d9:a5:38}
	I0407 13:50:52.929009 1230577 main.go:141] libmachine: (bridge-056871) DBG | domain bridge-056871 has defined IP address 192.168.50.60 and MAC address 52:54:00:d9:a5:38 in network mk-bridge-056871
	I0407 13:50:52.929315 1230577 ssh_runner.go:195] Run: grep 192.168.50.1	host.minikube.internal$ /etc/hosts
	I0407 13:50:52.935087 1230577 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.50.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0407 13:50:52.949529 1230577 kubeadm.go:883] updating cluster {Name:bridge-056871 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.35.0-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.46-1743675393-20591@sha256:8de8167e280414c9d16e4c3da59bc85bc7c9cc24228af995863fc7bcabfcf727 Memory:3072 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.32.2 ClusterName:bridge-056871 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:bridge} Nodes:[{Name: IP:192.168.50.60 Port:8443 KubernetesVersion:v1.32.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0407 13:50:52.949701 1230577 preload.go:131] Checking if preload exists for k8s version v1.32.2 and runtime crio
	I0407 13:50:52.949838 1230577 ssh_runner.go:195] Run: sudo crictl images --output json
	I0407 13:50:52.987568 1230577 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.32.2". assuming images are not preloaded.
	I0407 13:50:52.987653 1230577 ssh_runner.go:195] Run: which lz4
	I0407 13:50:52.992548 1230577 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I0407 13:50:52.998863 1230577 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0407 13:50:52.998913 1230577 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20602-1162386/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.32.2-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (399124012 bytes)
	I0407 13:50:51.189863 1229086 addons.go:514] duration metric: took 1.336491082s for enable addons: enabled=[default-storageclass storage-provisioner]
	I0407 13:50:51.238034 1229086 kapi.go:214] "coredns" deployment in "kube-system" namespace and "flannel-056871" context rescaled to 1 replicas
	I0407 13:50:52.257086 1229086 node_ready.go:53] node "flannel-056871" has status "Ready":"False"
	I0407 13:50:54.756619 1229086 node_ready.go:53] node "flannel-056871" has status "Ready":"False"
	I0407 13:50:54.234617 1220973 pod_ready.go:103] pod "metrics-server-f79f97bbb-m78vh" in "kube-system" namespace has status "Ready":"False"
	I0407 13:50:56.732728 1220973 pod_ready.go:103] pod "metrics-server-f79f97bbb-m78vh" in "kube-system" namespace has status "Ready":"False"
	I0407 13:50:54.515892 1230577 crio.go:462] duration metric: took 1.523394215s to copy over tarball
	I0407 13:50:54.515995 1230577 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I0407 13:50:57.213356 1230577 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (2.697323774s)
	I0407 13:50:57.213402 1230577 crio.go:469] duration metric: took 2.697477479s to extract the tarball
	I0407 13:50:57.213413 1230577 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I0407 13:50:57.255268 1230577 ssh_runner.go:195] Run: sudo crictl images --output json
	I0407 13:50:57.318494 1230577 crio.go:514] all images are preloaded for cri-o runtime.
	I0407 13:50:57.318534 1230577 cache_images.go:84] Images are preloaded, skipping loading
	I0407 13:50:57.318547 1230577 kubeadm.go:934] updating node { 192.168.50.60 8443 v1.32.2 crio true true} ...
	I0407 13:50:57.318677 1230577 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.32.2/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=bridge-056871 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.50.60
	
	[Install]
	 config:
	{KubernetesVersion:v1.32.2 ClusterName:bridge-056871 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:bridge}
	I0407 13:50:57.318774 1230577 ssh_runner.go:195] Run: crio config
	I0407 13:50:57.380053 1230577 cni.go:84] Creating CNI manager for "bridge"
	I0407 13:50:57.380079 1230577 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0407 13:50:57.380105 1230577 kubeadm.go:189] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.50.60 APIServerPort:8443 KubernetesVersion:v1.32.2 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:bridge-056871 NodeName:bridge-056871 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.50.60"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.50.60 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0407 13:50:57.380278 1230577 kubeadm.go:195] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.50.60
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "bridge-056871"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.50.60"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.50.60"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      - name: "proxy-refresh-interval"
	        value: "70000"
	kubernetesVersion: v1.32.2
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0407 13:50:57.380354 1230577 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.32.2
	I0407 13:50:57.392866 1230577 binaries.go:44] Found k8s binaries, skipping transfer
	I0407 13:50:57.392960 1230577 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0407 13:50:57.406687 1230577 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (312 bytes)
	I0407 13:50:57.427880 1230577 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0407 13:50:57.448616 1230577 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2290 bytes)
	I0407 13:50:57.470040 1230577 ssh_runner.go:195] Run: grep 192.168.50.60	control-plane.minikube.internal$ /etc/hosts
	I0407 13:50:57.475302 1230577 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.50.60	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0407 13:50:57.491749 1230577 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0407 13:50:57.639795 1230577 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0407 13:50:57.658241 1230577 certs.go:68] Setting up /home/jenkins/minikube-integration/20602-1162386/.minikube/profiles/bridge-056871 for IP: 192.168.50.60
	I0407 13:50:57.658290 1230577 certs.go:194] generating shared ca certs ...
	I0407 13:50:57.658317 1230577 certs.go:226] acquiring lock for ca certs: {Name:mk8e89191fca7f2111bdd08c345368f593b0d5d0 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0407 13:50:57.658561 1230577 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/20602-1162386/.minikube/ca.key
	I0407 13:50:57.658619 1230577 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/20602-1162386/.minikube/proxy-client-ca.key
	I0407 13:50:57.658633 1230577 certs.go:256] generating profile certs ...
	I0407 13:50:57.658706 1230577 certs.go:363] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/20602-1162386/.minikube/profiles/bridge-056871/client.key
	I0407 13:50:57.658742 1230577 crypto.go:68] Generating cert /home/jenkins/minikube-integration/20602-1162386/.minikube/profiles/bridge-056871/client.crt with IP's: []
	I0407 13:50:57.974616 1230577 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/20602-1162386/.minikube/profiles/bridge-056871/client.crt ...
	I0407 13:50:57.974653 1230577 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20602-1162386/.minikube/profiles/bridge-056871/client.crt: {Name:mkc867212f90b2762394f4051a0f0af7353f610d Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0407 13:50:57.974835 1230577 crypto.go:164] Writing key to /home/jenkins/minikube-integration/20602-1162386/.minikube/profiles/bridge-056871/client.key ...
	I0407 13:50:57.974848 1230577 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20602-1162386/.minikube/profiles/bridge-056871/client.key: {Name:mk05aafef2f5921529a0b513feffd0dc25ca3d50 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0407 13:50:57.974962 1230577 certs.go:363] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/20602-1162386/.minikube/profiles/bridge-056871/apiserver.key.d9851c72
	I0407 13:50:57.974997 1230577 crypto.go:68] Generating cert /home/jenkins/minikube-integration/20602-1162386/.minikube/profiles/bridge-056871/apiserver.crt.d9851c72 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.50.60]
	I0407 13:50:58.209297 1230577 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/20602-1162386/.minikube/profiles/bridge-056871/apiserver.crt.d9851c72 ...
	I0407 13:50:58.209334 1230577 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20602-1162386/.minikube/profiles/bridge-056871/apiserver.crt.d9851c72: {Name:mk98eb2013c8df0dacc23f994053809c81d58a0d Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0407 13:50:58.209554 1230577 crypto.go:164] Writing key to /home/jenkins/minikube-integration/20602-1162386/.minikube/profiles/bridge-056871/apiserver.key.d9851c72 ...
	I0407 13:50:58.209574 1230577 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20602-1162386/.minikube/profiles/bridge-056871/apiserver.key.d9851c72: {Name:mkcf9ecf16e30518e911274a1e12ea04551f6078 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0407 13:50:58.209682 1230577 certs.go:381] copying /home/jenkins/minikube-integration/20602-1162386/.minikube/profiles/bridge-056871/apiserver.crt.d9851c72 -> /home/jenkins/minikube-integration/20602-1162386/.minikube/profiles/bridge-056871/apiserver.crt
	I0407 13:50:58.209815 1230577 certs.go:385] copying /home/jenkins/minikube-integration/20602-1162386/.minikube/profiles/bridge-056871/apiserver.key.d9851c72 -> /home/jenkins/minikube-integration/20602-1162386/.minikube/profiles/bridge-056871/apiserver.key
	I0407 13:50:58.209874 1230577 certs.go:363] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/20602-1162386/.minikube/profiles/bridge-056871/proxy-client.key
	I0407 13:50:58.209891 1230577 crypto.go:68] Generating cert /home/jenkins/minikube-integration/20602-1162386/.minikube/profiles/bridge-056871/proxy-client.crt with IP's: []
	I0407 13:50:58.795229 1230577 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/20602-1162386/.minikube/profiles/bridge-056871/proxy-client.crt ...
	I0407 13:50:58.795270 1230577 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20602-1162386/.minikube/profiles/bridge-056871/proxy-client.crt: {Name:mke5bf8aa5bc8a94a5bfc7724d5b4299874dc779 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0407 13:50:58.795464 1230577 crypto.go:164] Writing key to /home/jenkins/minikube-integration/20602-1162386/.minikube/profiles/bridge-056871/proxy-client.key ...
	I0407 13:50:58.795479 1230577 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20602-1162386/.minikube/profiles/bridge-056871/proxy-client.key: {Name:mkcbdcd825378ab7ffd2c1e3905866b5d0bc479d Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0407 13:50:58.795656 1230577 certs.go:484] found cert: /home/jenkins/minikube-integration/20602-1162386/.minikube/certs/1169716.pem (1338 bytes)
	W0407 13:50:58.795697 1230577 certs.go:480] ignoring /home/jenkins/minikube-integration/20602-1162386/.minikube/certs/1169716_empty.pem, impossibly tiny 0 bytes
	I0407 13:50:58.795707 1230577 certs.go:484] found cert: /home/jenkins/minikube-integration/20602-1162386/.minikube/certs/ca-key.pem (1675 bytes)
	I0407 13:50:58.795729 1230577 certs.go:484] found cert: /home/jenkins/minikube-integration/20602-1162386/.minikube/certs/ca.pem (1078 bytes)
	I0407 13:50:58.795754 1230577 certs.go:484] found cert: /home/jenkins/minikube-integration/20602-1162386/.minikube/certs/cert.pem (1123 bytes)
	I0407 13:50:58.795777 1230577 certs.go:484] found cert: /home/jenkins/minikube-integration/20602-1162386/.minikube/certs/key.pem (1675 bytes)
	I0407 13:50:58.795814 1230577 certs.go:484] found cert: /home/jenkins/minikube-integration/20602-1162386/.minikube/files/etc/ssl/certs/11697162.pem (1708 bytes)
	I0407 13:50:58.796406 1230577 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20602-1162386/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0407 13:50:58.827453 1230577 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20602-1162386/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0407 13:50:58.860891 1230577 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20602-1162386/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0407 13:50:58.891713 1230577 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20602-1162386/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0407 13:50:58.920508 1230577 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20602-1162386/.minikube/profiles/bridge-056871/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1419 bytes)
	I0407 13:50:58.949761 1230577 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20602-1162386/.minikube/profiles/bridge-056871/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0407 13:50:58.979704 1230577 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20602-1162386/.minikube/profiles/bridge-056871/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0407 13:50:59.011376 1230577 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20602-1162386/.minikube/profiles/bridge-056871/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0407 13:50:59.041385 1230577 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20602-1162386/.minikube/files/etc/ssl/certs/11697162.pem --> /usr/share/ca-certificates/11697162.pem (1708 bytes)
	I0407 13:50:59.071111 1230577 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20602-1162386/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0407 13:50:59.099290 1230577 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20602-1162386/.minikube/certs/1169716.pem --> /usr/share/ca-certificates/1169716.pem (1338 bytes)
	I0407 13:50:59.129519 1230577 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0407 13:50:59.150197 1230577 ssh_runner.go:195] Run: openssl version
	I0407 13:50:59.156990 1230577 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0407 13:50:59.169023 1230577 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0407 13:50:59.175362 1230577 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Apr  7 12:14 /usr/share/ca-certificates/minikubeCA.pem
	I0407 13:50:59.175452 1230577 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0407 13:50:59.182838 1230577 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0407 13:50:59.196179 1230577 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/1169716.pem && ln -fs /usr/share/ca-certificates/1169716.pem /etc/ssl/certs/1169716.pem"
	I0407 13:50:59.220374 1230577 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/1169716.pem
	I0407 13:50:59.225777 1230577 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Apr  7 12:22 /usr/share/ca-certificates/1169716.pem
	I0407 13:50:59.225851 1230577 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/1169716.pem
	I0407 13:50:59.232822 1230577 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/1169716.pem /etc/ssl/certs/51391683.0"
	I0407 13:50:59.251539 1230577 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/11697162.pem && ln -fs /usr/share/ca-certificates/11697162.pem /etc/ssl/certs/11697162.pem"
	I0407 13:50:59.280935 1230577 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/11697162.pem
	I0407 13:50:59.289842 1230577 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Apr  7 12:22 /usr/share/ca-certificates/11697162.pem
	I0407 13:50:59.289940 1230577 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/11697162.pem
	I0407 13:50:59.299101 1230577 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/11697162.pem /etc/ssl/certs/3ec20f2e.0"
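	For context on the openssl/ln pairs above: they follow OpenSSL's hashed-directory convention, where a CA is found in /etc/ssl/certs via a symlink named <subject-hash>.0. A minimal sketch of the same pattern, using a hypothetical certificate path rather than the ones from this run:
	
	# compute the subject-name hash that OpenSSL's c_rehash-style lookup expects
	HASH=$(openssl x509 -hash -noout -in /usr/share/ca-certificates/example-ca.pem)
	# publish the cert into the trust directory under <hash>.0 so verification can find it
	sudo ln -fs /usr/share/ca-certificates/example-ca.pem "/etc/ssl/certs/${HASH}.0"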
	I0407 13:50:59.315775 1230577 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0407 13:50:59.321662 1230577 certs.go:399] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I0407 13:50:59.321777 1230577 kubeadm.go:392] StartCluster: {Name:bridge-056871 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.35.0-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.46-1743675393-20591@sha256:8de8167e280414c9d16e4c3da59bc85bc7c9cc24228af995863fc7bcabfcf727 Memory:3072 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.32.2 ClusterName:bridge-056871 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:bridge} Nodes:[{Name: IP:192.168.50.60 Port:8443 KubernetesVersion:v1.32.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0407 13:50:59.321930 1230577 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0407 13:50:59.322014 1230577 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0407 13:50:59.362423 1230577 cri.go:89] found id: ""
	I0407 13:50:59.362507 1230577 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0407 13:50:59.373224 1230577 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0407 13:50:59.386723 1230577 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0407 13:50:59.399124 1230577 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0407 13:50:59.399168 1230577 kubeadm.go:157] found existing configuration files:
	
	I0407 13:50:59.399226 1230577 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0407 13:50:59.411049 1230577 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0407 13:50:59.411127 1230577 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0407 13:50:59.426032 1230577 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0407 13:50:59.438641 1230577 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0407 13:50:59.438728 1230577 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0407 13:50:59.451089 1230577 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0407 13:50:59.463857 1230577 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0407 13:50:59.463974 1230577 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0407 13:50:59.479205 1230577 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0407 13:50:59.493922 1230577 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0407 13:50:59.494003 1230577 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0407 13:50:59.505870 1230577 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.32.2:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0407 13:50:59.564385 1230577 kubeadm.go:310] [init] Using Kubernetes version: v1.32.2
	I0407 13:50:59.564482 1230577 kubeadm.go:310] [preflight] Running pre-flight checks
	I0407 13:50:59.695317 1230577 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0407 13:50:59.695454 1230577 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0407 13:50:59.695578 1230577 kubeadm.go:310] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I0407 13:50:59.706980 1230577 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0407 13:50:56.757337 1229086 node_ready.go:53] node "flannel-056871" has status "Ready":"False"
	I0407 13:50:59.265295 1229086 node_ready.go:49] node "flannel-056871" has status "Ready":"True"
	I0407 13:50:59.265338 1229086 node_ready.go:38] duration metric: took 9.012025989s for node "flannel-056871" to be "Ready" ...
	I0407 13:50:59.265352 1229086 pod_ready.go:36] extra waiting up to 15m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0407 13:50:59.614573 1229086 pod_ready.go:79] waiting up to 15m0s for pod "coredns-668d6bf9bc-wtbtr" in "kube-system" namespace to be "Ready" ...
	I0407 13:50:59.708816 1230577 out.go:235]   - Generating certificates and keys ...
	I0407 13:50:59.708936 1230577 kubeadm.go:310] [certs] Using existing ca certificate authority
	I0407 13:50:59.709062 1230577 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I0407 13:50:59.754958 1230577 kubeadm.go:310] [certs] Generating "apiserver-kubelet-client" certificate and key
	I0407 13:51:00.057162 1230577 kubeadm.go:310] [certs] Generating "front-proxy-ca" certificate and key
	I0407 13:51:00.247122 1230577 kubeadm.go:310] [certs] Generating "front-proxy-client" certificate and key
	I0407 13:51:00.307908 1230577 kubeadm.go:310] [certs] Generating "etcd/ca" certificate and key
	I0407 13:51:00.548568 1230577 kubeadm.go:310] [certs] Generating "etcd/server" certificate and key
	I0407 13:51:00.548728 1230577 kubeadm.go:310] [certs] etcd/server serving cert is signed for DNS names [bridge-056871 localhost] and IPs [192.168.50.60 127.0.0.1 ::1]
	I0407 13:51:00.647617 1230577 kubeadm.go:310] [certs] Generating "etcd/peer" certificate and key
	I0407 13:51:00.647862 1230577 kubeadm.go:310] [certs] etcd/peer serving cert is signed for DNS names [bridge-056871 localhost] and IPs [192.168.50.60 127.0.0.1 ::1]
	I0407 13:51:00.792377 1230577 kubeadm.go:310] [certs] Generating "etcd/healthcheck-client" certificate and key
	I0407 13:51:00.871086 1230577 kubeadm.go:310] [certs] Generating "apiserver-etcd-client" certificate and key
	I0407 13:51:00.924184 1230577 kubeadm.go:310] [certs] Generating "sa" key and public key
	I0407 13:51:00.924560 1230577 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0407 13:51:01.350299 1230577 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0407 13:51:01.615993 1230577 kubeadm.go:310] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I0407 13:51:01.929649 1230577 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0407 13:51:02.113360 1230577 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0407 13:51:02.546006 1230577 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0407 13:51:02.546752 1230577 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0407 13:51:02.552370 1230577 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0407 13:50:58.732838 1220973 pod_ready.go:103] pod "metrics-server-f79f97bbb-m78vh" in "kube-system" namespace has status "Ready":"False"
	I0407 13:50:59.232858 1220973 pod_ready.go:82] duration metric: took 4m0.006754984s for pod "metrics-server-f79f97bbb-m78vh" in "kube-system" namespace to be "Ready" ...
	E0407 13:50:59.232890 1220973 pod_ready.go:67] WaitExtra: waitPodCondition: context deadline exceeded
	I0407 13:50:59.232901 1220973 pod_ready.go:39] duration metric: took 4m5.548332556s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0407 13:50:59.232938 1220973 api_server.go:52] waiting for apiserver process to appear ...
	I0407 13:50:59.232999 1220973 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0407 13:50:59.233061 1220973 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0407 13:50:59.299596 1220973 cri.go:89] found id: "6cbdc1dbe232daca30cf9798cbe45bc4e4b5484f2e30d667c64446b5286a7548"
	I0407 13:50:59.299625 1220973 cri.go:89] found id: ""
	I0407 13:50:59.299636 1220973 logs.go:282] 1 containers: [6cbdc1dbe232daca30cf9798cbe45bc4e4b5484f2e30d667c64446b5286a7548]
	I0407 13:50:59.299702 1220973 ssh_runner.go:195] Run: which crictl
	I0407 13:50:59.305111 1220973 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0407 13:50:59.305225 1220973 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0407 13:50:59.352747 1220973 cri.go:89] found id: "a812790b510c3b709c41019dad30590a5e59858ee6d5580754e4a036c2976bba"
	I0407 13:50:59.352779 1220973 cri.go:89] found id: ""
	I0407 13:50:59.352789 1220973 logs.go:282] 1 containers: [a812790b510c3b709c41019dad30590a5e59858ee6d5580754e4a036c2976bba]
	I0407 13:50:59.352846 1220973 ssh_runner.go:195] Run: which crictl
	I0407 13:50:59.357342 1220973 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0407 13:50:59.357450 1220973 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0407 13:50:59.403512 1220973 cri.go:89] found id: "4810859fa0f7ef2d1589ac95411a7ac1507611346d1aaa753426dcc88a4e856e"
	I0407 13:50:59.403544 1220973 cri.go:89] found id: ""
	I0407 13:50:59.403556 1220973 logs.go:282] 1 containers: [4810859fa0f7ef2d1589ac95411a7ac1507611346d1aaa753426dcc88a4e856e]
	I0407 13:50:59.403632 1220973 ssh_runner.go:195] Run: which crictl
	I0407 13:50:59.408194 1220973 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0407 13:50:59.408287 1220973 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0407 13:50:59.456348 1220973 cri.go:89] found id: "fb859882ec6a9fd778bf3ed031011efa021fa44fc8fe605ae66cde13666f4e31"
	I0407 13:50:59.456380 1220973 cri.go:89] found id: ""
	I0407 13:50:59.456390 1220973 logs.go:282] 1 containers: [fb859882ec6a9fd778bf3ed031011efa021fa44fc8fe605ae66cde13666f4e31]
	I0407 13:50:59.456459 1220973 ssh_runner.go:195] Run: which crictl
	I0407 13:50:59.461952 1220973 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0407 13:50:59.462054 1220973 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0407 13:50:59.521364 1220973 cri.go:89] found id: "32e3e8695b0d8a7b728aab2ac20aacda522f31d2370af79a49fc2b2ad83d48ce"
	I0407 13:50:59.521415 1220973 cri.go:89] found id: ""
	I0407 13:50:59.521424 1220973 logs.go:282] 1 containers: [32e3e8695b0d8a7b728aab2ac20aacda522f31d2370af79a49fc2b2ad83d48ce]
	I0407 13:50:59.521505 1220973 ssh_runner.go:195] Run: which crictl
	I0407 13:50:59.527616 1220973 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0407 13:50:59.527742 1220973 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0407 13:50:59.578098 1220973 cri.go:89] found id: "73783df615d51c4dbbdc8b3d7491f98b0db8a0fe47164cbfb337f6d530ef0479"
	I0407 13:50:59.578131 1220973 cri.go:89] found id: ""
	I0407 13:50:59.578141 1220973 logs.go:282] 1 containers: [73783df615d51c4dbbdc8b3d7491f98b0db8a0fe47164cbfb337f6d530ef0479]
	I0407 13:50:59.578211 1220973 ssh_runner.go:195] Run: which crictl
	I0407 13:50:59.585662 1220973 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0407 13:50:59.585783 1220973 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0407 13:50:59.641063 1220973 cri.go:89] found id: ""
	I0407 13:50:59.641098 1220973 logs.go:282] 0 containers: []
	W0407 13:50:59.641109 1220973 logs.go:284] No container was found matching "kindnet"
	I0407 13:50:59.641118 1220973 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0407 13:50:59.641207 1220973 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0407 13:50:59.689067 1220973 cri.go:89] found id: "76e9f9bee671aee2fee8265d665ba5af5294dbaad7e0029b08a664f346f9b48a"
	I0407 13:50:59.689112 1220973 cri.go:89] found id: ""
	I0407 13:50:59.689125 1220973 logs.go:282] 1 containers: [76e9f9bee671aee2fee8265d665ba5af5294dbaad7e0029b08a664f346f9b48a]
	I0407 13:50:59.689210 1220973 ssh_runner.go:195] Run: which crictl
	I0407 13:50:59.695252 1220973 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I0407 13:50:59.695343 1220973 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I0407 13:50:59.740221 1220973 cri.go:89] found id: "10ed6f8775b27e44b030bfbb966fbdec66d0417583ee1437a63f77d3ad67fac9"
	I0407 13:50:59.740252 1220973 cri.go:89] found id: "1ed6295bdc8301b8f8525a335c839a14438ad10707fce521a1414f1950e8458f"
	I0407 13:50:59.740256 1220973 cri.go:89] found id: ""
	I0407 13:50:59.740270 1220973 logs.go:282] 2 containers: [10ed6f8775b27e44b030bfbb966fbdec66d0417583ee1437a63f77d3ad67fac9 1ed6295bdc8301b8f8525a335c839a14438ad10707fce521a1414f1950e8458f]
	I0407 13:50:59.740348 1220973 ssh_runner.go:195] Run: which crictl
	I0407 13:50:59.745389 1220973 ssh_runner.go:195] Run: which crictl
	I0407 13:50:59.750517 1220973 logs.go:123] Gathering logs for kube-apiserver [6cbdc1dbe232daca30cf9798cbe45bc4e4b5484f2e30d667c64446b5286a7548] ...
	I0407 13:50:59.750557 1220973 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 6cbdc1dbe232daca30cf9798cbe45bc4e4b5484f2e30d667c64446b5286a7548"
	I0407 13:50:59.810125 1220973 logs.go:123] Gathering logs for coredns [4810859fa0f7ef2d1589ac95411a7ac1507611346d1aaa753426dcc88a4e856e] ...
	I0407 13:50:59.810182 1220973 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 4810859fa0f7ef2d1589ac95411a7ac1507611346d1aaa753426dcc88a4e856e"
	I0407 13:50:59.860690 1220973 logs.go:123] Gathering logs for kube-proxy [32e3e8695b0d8a7b728aab2ac20aacda522f31d2370af79a49fc2b2ad83d48ce] ...
	I0407 13:50:59.860741 1220973 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 32e3e8695b0d8a7b728aab2ac20aacda522f31d2370af79a49fc2b2ad83d48ce"
	I0407 13:50:59.904254 1220973 logs.go:123] Gathering logs for kube-controller-manager [73783df615d51c4dbbdc8b3d7491f98b0db8a0fe47164cbfb337f6d530ef0479] ...
	I0407 13:50:59.904291 1220973 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 73783df615d51c4dbbdc8b3d7491f98b0db8a0fe47164cbfb337f6d530ef0479"
	I0407 13:50:59.981119 1220973 logs.go:123] Gathering logs for storage-provisioner [1ed6295bdc8301b8f8525a335c839a14438ad10707fce521a1414f1950e8458f] ...
	I0407 13:50:59.981181 1220973 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 1ed6295bdc8301b8f8525a335c839a14438ad10707fce521a1414f1950e8458f"
	I0407 13:51:00.033545 1220973 logs.go:123] Gathering logs for CRI-O ...
	I0407 13:51:00.033589 1220973 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0407 13:51:00.691879 1220973 logs.go:123] Gathering logs for dmesg ...
	I0407 13:51:00.691971 1220973 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0407 13:51:00.709206 1220973 logs.go:123] Gathering logs for etcd [a812790b510c3b709c41019dad30590a5e59858ee6d5580754e4a036c2976bba] ...
	I0407 13:51:00.709256 1220973 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 a812790b510c3b709c41019dad30590a5e59858ee6d5580754e4a036c2976bba"
	I0407 13:51:00.771594 1220973 logs.go:123] Gathering logs for kube-scheduler [fb859882ec6a9fd778bf3ed031011efa021fa44fc8fe605ae66cde13666f4e31] ...
	I0407 13:51:00.771666 1220973 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 fb859882ec6a9fd778bf3ed031011efa021fa44fc8fe605ae66cde13666f4e31"
	I0407 13:51:00.821357 1220973 logs.go:123] Gathering logs for kubernetes-dashboard [76e9f9bee671aee2fee8265d665ba5af5294dbaad7e0029b08a664f346f9b48a] ...
	I0407 13:51:00.821404 1220973 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 76e9f9bee671aee2fee8265d665ba5af5294dbaad7e0029b08a664f346f9b48a"
	I0407 13:51:00.865005 1220973 logs.go:123] Gathering logs for storage-provisioner [10ed6f8775b27e44b030bfbb966fbdec66d0417583ee1437a63f77d3ad67fac9] ...
	I0407 13:51:00.865053 1220973 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 10ed6f8775b27e44b030bfbb966fbdec66d0417583ee1437a63f77d3ad67fac9"
	I0407 13:51:00.904894 1220973 logs.go:123] Gathering logs for container status ...
	I0407 13:51:00.904934 1220973 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0407 13:51:00.968005 1220973 logs.go:123] Gathering logs for kubelet ...
	I0407 13:51:00.968072 1220973 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0407 13:51:01.067832 1220973 logs.go:123] Gathering logs for describe nodes ...
	I0407 13:51:01.067896 1220973 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.32.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0407 13:51:02.554852 1230577 out.go:235]   - Booting up control plane ...
	I0407 13:51:02.555003 1230577 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0407 13:51:02.555092 1230577 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0407 13:51:02.555231 1230577 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0407 13:51:02.575773 1230577 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0407 13:51:02.585901 1230577 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0407 13:51:02.586018 1230577 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I0407 13:51:02.742270 1230577 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I0407 13:51:02.742452 1230577 kubeadm.go:310] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I0407 13:51:03.243655 1230577 kubeadm.go:310] [kubelet-check] The kubelet is healthy after 502.087964ms
	I0407 13:51:03.243741 1230577 kubeadm.go:310] [api-check] Waiting for a healthy API server. This can take up to 4m0s
	I0407 13:51:01.622566 1229086 pod_ready.go:103] pod "coredns-668d6bf9bc-wtbtr" in "kube-system" namespace has status "Ready":"False"
	I0407 13:51:04.124574 1229086 pod_ready.go:103] pod "coredns-668d6bf9bc-wtbtr" in "kube-system" namespace has status "Ready":"False"
	I0407 13:51:03.750152 1220973 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0407 13:51:03.767770 1220973 api_server.go:72] duration metric: took 4m17.405802625s to wait for apiserver process to appear ...
	I0407 13:51:03.767806 1220973 api_server.go:88] waiting for apiserver healthz status ...
	I0407 13:51:03.767861 1220973 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0407 13:51:03.767930 1220973 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0407 13:51:03.818498 1220973 cri.go:89] found id: "6cbdc1dbe232daca30cf9798cbe45bc4e4b5484f2e30d667c64446b5286a7548"
	I0407 13:51:03.818536 1220973 cri.go:89] found id: ""
	I0407 13:51:03.818548 1220973 logs.go:282] 1 containers: [6cbdc1dbe232daca30cf9798cbe45bc4e4b5484f2e30d667c64446b5286a7548]
	I0407 13:51:03.818627 1220973 ssh_runner.go:195] Run: which crictl
	I0407 13:51:03.823650 1220973 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0407 13:51:03.823760 1220973 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0407 13:51:03.875522 1220973 cri.go:89] found id: "a812790b510c3b709c41019dad30590a5e59858ee6d5580754e4a036c2976bba"
	I0407 13:51:03.875682 1220973 cri.go:89] found id: ""
	I0407 13:51:03.875708 1220973 logs.go:282] 1 containers: [a812790b510c3b709c41019dad30590a5e59858ee6d5580754e4a036c2976bba]
	I0407 13:51:03.875823 1220973 ssh_runner.go:195] Run: which crictl
	I0407 13:51:03.881759 1220973 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0407 13:51:03.881872 1220973 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0407 13:51:03.934057 1220973 cri.go:89] found id: "4810859fa0f7ef2d1589ac95411a7ac1507611346d1aaa753426dcc88a4e856e"
	I0407 13:51:03.934089 1220973 cri.go:89] found id: ""
	I0407 13:51:03.934100 1220973 logs.go:282] 1 containers: [4810859fa0f7ef2d1589ac95411a7ac1507611346d1aaa753426dcc88a4e856e]
	I0407 13:51:03.934167 1220973 ssh_runner.go:195] Run: which crictl
	I0407 13:51:03.941166 1220973 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0407 13:51:03.941286 1220973 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0407 13:51:04.007594 1220973 cri.go:89] found id: "fb859882ec6a9fd778bf3ed031011efa021fa44fc8fe605ae66cde13666f4e31"
	I0407 13:51:04.007635 1220973 cri.go:89] found id: ""
	I0407 13:51:04.007647 1220973 logs.go:282] 1 containers: [fb859882ec6a9fd778bf3ed031011efa021fa44fc8fe605ae66cde13666f4e31]
	I0407 13:51:04.007730 1220973 ssh_runner.go:195] Run: which crictl
	I0407 13:51:04.013908 1220973 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0407 13:51:04.014034 1220973 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0407 13:51:04.081983 1220973 cri.go:89] found id: "32e3e8695b0d8a7b728aab2ac20aacda522f31d2370af79a49fc2b2ad83d48ce"
	I0407 13:51:04.082043 1220973 cri.go:89] found id: ""
	I0407 13:51:04.082062 1220973 logs.go:282] 1 containers: [32e3e8695b0d8a7b728aab2ac20aacda522f31d2370af79a49fc2b2ad83d48ce]
	I0407 13:51:04.082162 1220973 ssh_runner.go:195] Run: which crictl
	I0407 13:51:04.088227 1220973 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0407 13:51:04.088493 1220973 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0407 13:51:04.134694 1220973 cri.go:89] found id: "73783df615d51c4dbbdc8b3d7491f98b0db8a0fe47164cbfb337f6d530ef0479"
	I0407 13:51:04.134730 1220973 cri.go:89] found id: ""
	I0407 13:51:04.134744 1220973 logs.go:282] 1 containers: [73783df615d51c4dbbdc8b3d7491f98b0db8a0fe47164cbfb337f6d530ef0479]
	I0407 13:51:04.134818 1220973 ssh_runner.go:195] Run: which crictl
	I0407 13:51:04.140372 1220973 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0407 13:51:04.140465 1220973 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0407 13:51:04.183295 1220973 cri.go:89] found id: ""
	I0407 13:51:04.183336 1220973 logs.go:282] 0 containers: []
	W0407 13:51:04.183347 1220973 logs.go:284] No container was found matching "kindnet"
	I0407 13:51:04.183355 1220973 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0407 13:51:04.183426 1220973 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0407 13:51:04.231005 1220973 cri.go:89] found id: "76e9f9bee671aee2fee8265d665ba5af5294dbaad7e0029b08a664f346f9b48a"
	I0407 13:51:04.231046 1220973 cri.go:89] found id: ""
	I0407 13:51:04.231058 1220973 logs.go:282] 1 containers: [76e9f9bee671aee2fee8265d665ba5af5294dbaad7e0029b08a664f346f9b48a]
	I0407 13:51:04.231145 1220973 ssh_runner.go:195] Run: which crictl
	I0407 13:51:04.237741 1220973 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I0407 13:51:04.237843 1220973 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I0407 13:51:04.288156 1220973 cri.go:89] found id: "10ed6f8775b27e44b030bfbb966fbdec66d0417583ee1437a63f77d3ad67fac9"
	I0407 13:51:04.288193 1220973 cri.go:89] found id: "1ed6295bdc8301b8f8525a335c839a14438ad10707fce521a1414f1950e8458f"
	I0407 13:51:04.288198 1220973 cri.go:89] found id: ""
	I0407 13:51:04.288209 1220973 logs.go:282] 2 containers: [10ed6f8775b27e44b030bfbb966fbdec66d0417583ee1437a63f77d3ad67fac9 1ed6295bdc8301b8f8525a335c839a14438ad10707fce521a1414f1950e8458f]
	I0407 13:51:04.288293 1220973 ssh_runner.go:195] Run: which crictl
	I0407 13:51:04.293482 1220973 ssh_runner.go:195] Run: which crictl
	I0407 13:51:04.299121 1220973 logs.go:123] Gathering logs for kube-scheduler [fb859882ec6a9fd778bf3ed031011efa021fa44fc8fe605ae66cde13666f4e31] ...
	I0407 13:51:04.299170 1220973 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 fb859882ec6a9fd778bf3ed031011efa021fa44fc8fe605ae66cde13666f4e31"
	I0407 13:51:04.341860 1220973 logs.go:123] Gathering logs for kube-proxy [32e3e8695b0d8a7b728aab2ac20aacda522f31d2370af79a49fc2b2ad83d48ce] ...
	I0407 13:51:04.341899 1220973 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 32e3e8695b0d8a7b728aab2ac20aacda522f31d2370af79a49fc2b2ad83d48ce"
	I0407 13:51:04.399464 1220973 logs.go:123] Gathering logs for storage-provisioner [10ed6f8775b27e44b030bfbb966fbdec66d0417583ee1437a63f77d3ad67fac9] ...
	I0407 13:51:04.399522 1220973 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 10ed6f8775b27e44b030bfbb966fbdec66d0417583ee1437a63f77d3ad67fac9"
	I0407 13:51:04.450476 1220973 logs.go:123] Gathering logs for storage-provisioner [1ed6295bdc8301b8f8525a335c839a14438ad10707fce521a1414f1950e8458f] ...
	I0407 13:51:04.450532 1220973 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 1ed6295bdc8301b8f8525a335c839a14438ad10707fce521a1414f1950e8458f"
	I0407 13:51:04.496635 1220973 logs.go:123] Gathering logs for kubelet ...
	I0407 13:51:04.496675 1220973 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0407 13:51:04.599935 1220973 logs.go:123] Gathering logs for describe nodes ...
	I0407 13:51:04.599980 1220973 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.32.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0407 13:51:04.736164 1220973 logs.go:123] Gathering logs for kube-controller-manager [73783df615d51c4dbbdc8b3d7491f98b0db8a0fe47164cbfb337f6d530ef0479] ...
	I0407 13:51:04.736213 1220973 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 73783df615d51c4dbbdc8b3d7491f98b0db8a0fe47164cbfb337f6d530ef0479"
	I0407 13:51:04.801457 1220973 logs.go:123] Gathering logs for kubernetes-dashboard [76e9f9bee671aee2fee8265d665ba5af5294dbaad7e0029b08a664f346f9b48a] ...
	I0407 13:51:04.801522 1220973 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 76e9f9bee671aee2fee8265d665ba5af5294dbaad7e0029b08a664f346f9b48a"
	I0407 13:51:04.851783 1220973 logs.go:123] Gathering logs for CRI-O ...
	I0407 13:51:04.851841 1220973 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0407 13:51:05.386797 1220973 logs.go:123] Gathering logs for container status ...
	I0407 13:51:05.386851 1220973 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0407 13:51:05.446579 1220973 logs.go:123] Gathering logs for dmesg ...
	I0407 13:51:05.446640 1220973 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0407 13:51:05.468506 1220973 logs.go:123] Gathering logs for kube-apiserver [6cbdc1dbe232daca30cf9798cbe45bc4e4b5484f2e30d667c64446b5286a7548] ...
	I0407 13:51:05.468564 1220973 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 6cbdc1dbe232daca30cf9798cbe45bc4e4b5484f2e30d667c64446b5286a7548"
	I0407 13:51:05.529064 1220973 logs.go:123] Gathering logs for etcd [a812790b510c3b709c41019dad30590a5e59858ee6d5580754e4a036c2976bba] ...
	I0407 13:51:05.529121 1220973 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 a812790b510c3b709c41019dad30590a5e59858ee6d5580754e4a036c2976bba"
	I0407 13:51:05.589325 1220973 logs.go:123] Gathering logs for coredns [4810859fa0f7ef2d1589ac95411a7ac1507611346d1aaa753426dcc88a4e856e] ...
	I0407 13:51:05.589379 1220973 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 4810859fa0f7ef2d1589ac95411a7ac1507611346d1aaa753426dcc88a4e856e"
	I0407 13:51:08.744835 1230577 kubeadm.go:310] [api-check] The API server is healthy after 5.502339468s
	I0407 13:51:08.765064 1230577 kubeadm.go:310] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0407 13:51:08.786774 1230577 kubeadm.go:310] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I0407 13:51:08.845055 1230577 kubeadm.go:310] [upload-certs] Skipping phase. Please see --upload-certs
	I0407 13:51:08.845300 1230577 kubeadm.go:310] [mark-control-plane] Marking the node bridge-056871 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I0407 13:51:08.870492 1230577 kubeadm.go:310] [bootstrap-token] Using token: q192q8.5uqppii6wweemeid
	I0407 13:51:08.872629 1230577 out.go:235]   - Configuring RBAC rules ...
	I0407 13:51:08.872809 1230577 kubeadm.go:310] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0407 13:51:08.889428 1230577 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I0407 13:51:08.906087 1230577 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0407 13:51:08.920396 1230577 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I0407 13:51:08.932368 1230577 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0407 13:51:08.940930 1230577 kubeadm.go:310] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0407 13:51:09.157342 1230577 kubeadm.go:310] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I0407 13:51:09.608864 1230577 kubeadm.go:310] [addons] Applied essential addon: CoreDNS
	I0407 13:51:10.152069 1230577 kubeadm.go:310] [addons] Applied essential addon: kube-proxy
	I0407 13:51:10.152099 1230577 kubeadm.go:310] 
	I0407 13:51:10.152202 1230577 kubeadm.go:310] Your Kubernetes control-plane has initialized successfully!
	I0407 13:51:10.152213 1230577 kubeadm.go:310] 
	I0407 13:51:10.152334 1230577 kubeadm.go:310] To start using your cluster, you need to run the following as a regular user:
	I0407 13:51:10.152343 1230577 kubeadm.go:310] 
	I0407 13:51:10.152379 1230577 kubeadm.go:310]   mkdir -p $HOME/.kube
	I0407 13:51:10.152469 1230577 kubeadm.go:310]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0407 13:51:10.152528 1230577 kubeadm.go:310]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0407 13:51:10.152535 1230577 kubeadm.go:310] 
	I0407 13:51:10.152581 1230577 kubeadm.go:310] Alternatively, if you are the root user, you can run:
	I0407 13:51:10.152587 1230577 kubeadm.go:310] 
	I0407 13:51:10.152640 1230577 kubeadm.go:310]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I0407 13:51:10.152647 1230577 kubeadm.go:310] 
	I0407 13:51:10.152691 1230577 kubeadm.go:310] You should now deploy a pod network to the cluster.
	I0407 13:51:10.152775 1230577 kubeadm.go:310] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0407 13:51:10.152867 1230577 kubeadm.go:310]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0407 13:51:10.152877 1230577 kubeadm.go:310] 
	I0407 13:51:10.152972 1230577 kubeadm.go:310] You can now join any number of control-plane nodes by copying certificate authorities
	I0407 13:51:10.153231 1230577 kubeadm.go:310] and service account keys on each node and then running the following as root:
	I0407 13:51:10.153316 1230577 kubeadm.go:310] 
	I0407 13:51:10.153526 1230577 kubeadm.go:310]   kubeadm join control-plane.minikube.internal:8443 --token q192q8.5uqppii6wweemeid \
	I0407 13:51:10.153665 1230577 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:001387253bb6e222db2af12e9fcbe5a1c3ee2a6f53970e58b5a7d017a3fc6618 \
	I0407 13:51:10.153696 1230577 kubeadm.go:310] 	--control-plane 
	I0407 13:51:10.153715 1230577 kubeadm.go:310] 
	I0407 13:51:10.153851 1230577 kubeadm.go:310] Then you can join any number of worker nodes by running the following on each as root:
	I0407 13:51:10.153900 1230577 kubeadm.go:310] 
	I0407 13:51:10.154039 1230577 kubeadm.go:310] kubeadm join control-plane.minikube.internal:8443 --token q192q8.5uqppii6wweemeid \
	I0407 13:51:10.154310 1230577 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:001387253bb6e222db2af12e9fcbe5a1c3ee2a6f53970e58b5a7d017a3fc6618 
	I0407 13:51:10.154481 1230577 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0407 13:51:10.154507 1230577 cni.go:84] Creating CNI manager for "bridge"
	I0407 13:51:10.156948 1230577 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0407 13:51:06.623742 1229086 pod_ready.go:103] pod "coredns-668d6bf9bc-wtbtr" in "kube-system" namespace has status "Ready":"False"
	I0407 13:51:09.124641 1229086 pod_ready.go:103] pod "coredns-668d6bf9bc-wtbtr" in "kube-system" namespace has status "Ready":"False"
	I0407 13:51:08.147478 1220973 api_server.go:253] Checking apiserver healthz at https://192.168.72.39:8444/healthz ...
	I0407 13:51:08.156708 1220973 api_server.go:279] https://192.168.72.39:8444/healthz returned 200:
	ok
	I0407 13:51:08.158021 1220973 api_server.go:141] control plane version: v1.32.2
	I0407 13:51:08.158052 1220973 api_server.go:131] duration metric: took 4.390237602s to wait for apiserver health ...
	I0407 13:51:08.158064 1220973 system_pods.go:43] waiting for kube-system pods to appear ...
	I0407 13:51:08.158093 1220973 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0407 13:51:08.158144 1220973 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0407 13:51:08.201978 1220973 cri.go:89] found id: "6cbdc1dbe232daca30cf9798cbe45bc4e4b5484f2e30d667c64446b5286a7548"
	I0407 13:51:08.202009 1220973 cri.go:89] found id: ""
	I0407 13:51:08.202021 1220973 logs.go:282] 1 containers: [6cbdc1dbe232daca30cf9798cbe45bc4e4b5484f2e30d667c64446b5286a7548]
	I0407 13:51:08.202088 1220973 ssh_runner.go:195] Run: which crictl
	I0407 13:51:08.206567 1220973 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0407 13:51:08.206658 1220973 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0407 13:51:08.266741 1220973 cri.go:89] found id: "a812790b510c3b709c41019dad30590a5e59858ee6d5580754e4a036c2976bba"
	I0407 13:51:08.266771 1220973 cri.go:89] found id: ""
	I0407 13:51:08.266782 1220973 logs.go:282] 1 containers: [a812790b510c3b709c41019dad30590a5e59858ee6d5580754e4a036c2976bba]
	I0407 13:51:08.266853 1220973 ssh_runner.go:195] Run: which crictl
	I0407 13:51:08.271249 1220973 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0407 13:51:08.271321 1220973 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0407 13:51:08.310236 1220973 cri.go:89] found id: "4810859fa0f7ef2d1589ac95411a7ac1507611346d1aaa753426dcc88a4e856e"
	I0407 13:51:08.310270 1220973 cri.go:89] found id: ""
	I0407 13:51:08.310279 1220973 logs.go:282] 1 containers: [4810859fa0f7ef2d1589ac95411a7ac1507611346d1aaa753426dcc88a4e856e]
	I0407 13:51:08.310331 1220973 ssh_runner.go:195] Run: which crictl
	I0407 13:51:08.314760 1220973 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0407 13:51:08.314857 1220973 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0407 13:51:08.358924 1220973 cri.go:89] found id: "fb859882ec6a9fd778bf3ed031011efa021fa44fc8fe605ae66cde13666f4e31"
	I0407 13:51:08.358959 1220973 cri.go:89] found id: ""
	I0407 13:51:08.358970 1220973 logs.go:282] 1 containers: [fb859882ec6a9fd778bf3ed031011efa021fa44fc8fe605ae66cde13666f4e31]
	I0407 13:51:08.359049 1220973 ssh_runner.go:195] Run: which crictl
	I0407 13:51:08.363412 1220973 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0407 13:51:08.363502 1220973 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0407 13:51:08.401615 1220973 cri.go:89] found id: "32e3e8695b0d8a7b728aab2ac20aacda522f31d2370af79a49fc2b2ad83d48ce"
	I0407 13:51:08.401643 1220973 cri.go:89] found id: ""
	I0407 13:51:08.401653 1220973 logs.go:282] 1 containers: [32e3e8695b0d8a7b728aab2ac20aacda522f31d2370af79a49fc2b2ad83d48ce]
	I0407 13:51:08.401733 1220973 ssh_runner.go:195] Run: which crictl
	I0407 13:51:08.407568 1220973 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0407 13:51:08.407681 1220973 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0407 13:51:08.450987 1220973 cri.go:89] found id: "73783df615d51c4dbbdc8b3d7491f98b0db8a0fe47164cbfb337f6d530ef0479"
	I0407 13:51:08.451037 1220973 cri.go:89] found id: ""
	I0407 13:51:08.451072 1220973 logs.go:282] 1 containers: [73783df615d51c4dbbdc8b3d7491f98b0db8a0fe47164cbfb337f6d530ef0479]
	I0407 13:51:08.451144 1220973 ssh_runner.go:195] Run: which crictl
	I0407 13:51:08.455919 1220973 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0407 13:51:08.456033 1220973 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0407 13:51:08.494960 1220973 cri.go:89] found id: ""
	I0407 13:51:08.495003 1220973 logs.go:282] 0 containers: []
	W0407 13:51:08.495017 1220973 logs.go:284] No container was found matching "kindnet"
	I0407 13:51:08.495025 1220973 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I0407 13:51:08.495106 1220973 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I0407 13:51:08.543463 1220973 cri.go:89] found id: "10ed6f8775b27e44b030bfbb966fbdec66d0417583ee1437a63f77d3ad67fac9"
	I0407 13:51:08.543488 1220973 cri.go:89] found id: "1ed6295bdc8301b8f8525a335c839a14438ad10707fce521a1414f1950e8458f"
	I0407 13:51:08.543493 1220973 cri.go:89] found id: ""
	I0407 13:51:08.543519 1220973 logs.go:282] 2 containers: [10ed6f8775b27e44b030bfbb966fbdec66d0417583ee1437a63f77d3ad67fac9 1ed6295bdc8301b8f8525a335c839a14438ad10707fce521a1414f1950e8458f]
	I0407 13:51:08.543572 1220973 ssh_runner.go:195] Run: which crictl
	I0407 13:51:08.548346 1220973 ssh_runner.go:195] Run: which crictl
	I0407 13:51:08.552343 1220973 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0407 13:51:08.552415 1220973 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0407 13:51:08.592306 1220973 cri.go:89] found id: "76e9f9bee671aee2fee8265d665ba5af5294dbaad7e0029b08a664f346f9b48a"
	I0407 13:51:08.592342 1220973 cri.go:89] found id: ""
	I0407 13:51:08.592354 1220973 logs.go:282] 1 containers: [76e9f9bee671aee2fee8265d665ba5af5294dbaad7e0029b08a664f346f9b48a]
	I0407 13:51:08.592427 1220973 ssh_runner.go:195] Run: which crictl
	I0407 13:51:08.596797 1220973 logs.go:123] Gathering logs for dmesg ...
	I0407 13:51:08.596825 1220973 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0407 13:51:08.611785 1220973 logs.go:123] Gathering logs for describe nodes ...
	I0407 13:51:08.611816 1220973 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.32.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0407 13:51:08.722127 1220973 logs.go:123] Gathering logs for etcd [a812790b510c3b709c41019dad30590a5e59858ee6d5580754e4a036c2976bba] ...
	I0407 13:51:08.722186 1220973 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 a812790b510c3b709c41019dad30590a5e59858ee6d5580754e4a036c2976bba"
	I0407 13:51:08.784857 1220973 logs.go:123] Gathering logs for coredns [4810859fa0f7ef2d1589ac95411a7ac1507611346d1aaa753426dcc88a4e856e] ...
	I0407 13:51:08.784904 1220973 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 4810859fa0f7ef2d1589ac95411a7ac1507611346d1aaa753426dcc88a4e856e"
	I0407 13:51:08.823331 1220973 logs.go:123] Gathering logs for kube-scheduler [fb859882ec6a9fd778bf3ed031011efa021fa44fc8fe605ae66cde13666f4e31] ...
	I0407 13:51:08.823361 1220973 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 fb859882ec6a9fd778bf3ed031011efa021fa44fc8fe605ae66cde13666f4e31"
	I0407 13:51:08.861421 1220973 logs.go:123] Gathering logs for kube-proxy [32e3e8695b0d8a7b728aab2ac20aacda522f31d2370af79a49fc2b2ad83d48ce] ...
	I0407 13:51:08.861457 1220973 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 32e3e8695b0d8a7b728aab2ac20aacda522f31d2370af79a49fc2b2ad83d48ce"
	I0407 13:51:08.923758 1220973 logs.go:123] Gathering logs for storage-provisioner [10ed6f8775b27e44b030bfbb966fbdec66d0417583ee1437a63f77d3ad67fac9] ...
	I0407 13:51:08.923805 1220973 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 10ed6f8775b27e44b030bfbb966fbdec66d0417583ee1437a63f77d3ad67fac9"
	I0407 13:51:08.978649 1220973 logs.go:123] Gathering logs for kubernetes-dashboard [76e9f9bee671aee2fee8265d665ba5af5294dbaad7e0029b08a664f346f9b48a] ...
	I0407 13:51:08.978702 1220973 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 76e9f9bee671aee2fee8265d665ba5af5294dbaad7e0029b08a664f346f9b48a"
	I0407 13:51:09.028285 1220973 logs.go:123] Gathering logs for kubelet ...
	I0407 13:51:09.028327 1220973 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0407 13:51:09.130281 1220973 logs.go:123] Gathering logs for kube-apiserver [6cbdc1dbe232daca30cf9798cbe45bc4e4b5484f2e30d667c64446b5286a7548] ...
	I0407 13:51:09.130329 1220973 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 6cbdc1dbe232daca30cf9798cbe45bc4e4b5484f2e30d667c64446b5286a7548"
	I0407 13:51:09.203999 1220973 logs.go:123] Gathering logs for kube-controller-manager [73783df615d51c4dbbdc8b3d7491f98b0db8a0fe47164cbfb337f6d530ef0479] ...
	I0407 13:51:09.204060 1220973 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 73783df615d51c4dbbdc8b3d7491f98b0db8a0fe47164cbfb337f6d530ef0479"
	I0407 13:51:09.280859 1220973 logs.go:123] Gathering logs for storage-provisioner [1ed6295bdc8301b8f8525a335c839a14438ad10707fce521a1414f1950e8458f] ...
	I0407 13:51:09.280917 1220973 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 1ed6295bdc8301b8f8525a335c839a14438ad10707fce521a1414f1950e8458f"
	I0407 13:51:09.329964 1220973 logs.go:123] Gathering logs for CRI-O ...
	I0407 13:51:09.329999 1220973 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0407 13:51:09.777851 1220973 logs.go:123] Gathering logs for container status ...
	I0407 13:51:09.777933 1220973 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0407 13:51:12.329139 1220973 system_pods.go:59] 8 kube-system pods found
	I0407 13:51:12.329192 1220973 system_pods.go:61] "coredns-668d6bf9bc-l8dqs" [d22da438-7207-4ea5-886e-4877202a0503] Running
	I0407 13:51:12.329198 1220973 system_pods.go:61] "etcd-default-k8s-diff-port-405061" [616d0285-308b-4f87-a840-2d6c4aafa12b] Running
	I0407 13:51:12.329204 1220973 system_pods.go:61] "kube-apiserver-default-k8s-diff-port-405061" [2bccbc06-ecc1-4a5c-80b4-1b1287cad2a8] Running
	I0407 13:51:12.329209 1220973 system_pods.go:61] "kube-controller-manager-default-k8s-diff-port-405061" [f6ef48bd-c717-4a62-90b5-2ba0d395dc23] Running
	I0407 13:51:12.329213 1220973 system_pods.go:61] "kube-proxy-59k7q" [fd139676-0ec9-4996-8f72-b2cc18db7c58] Running
	I0407 13:51:12.329217 1220973 system_pods.go:61] "kube-scheduler-default-k8s-diff-port-405061" [7691ca99-87e9-4a20-8e8a-ad956b63c8f1] Running
	I0407 13:51:12.329223 1220973 system_pods.go:61] "metrics-server-f79f97bbb-m78vh" [29d4eed6-dbb9-4a42-a4ed-644adfc6c32e] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0407 13:51:12.329228 1220973 system_pods.go:61] "storage-provisioner" [81745e26-f62c-431f-a4ed-8919d519705f] Running
	I0407 13:51:12.329249 1220973 system_pods.go:74] duration metric: took 4.171177863s to wait for pod list to return data ...
	I0407 13:51:12.329258 1220973 default_sa.go:34] waiting for default service account to be created ...
	I0407 13:51:12.334358 1220973 default_sa.go:45] found service account: "default"
	I0407 13:51:12.334389 1220973 default_sa.go:55] duration metric: took 5.124791ms for default service account to be created ...
	I0407 13:51:12.334400 1220973 system_pods.go:116] waiting for k8s-apps to be running ...
	I0407 13:51:12.338677 1220973 system_pods.go:86] 8 kube-system pods found
	I0407 13:51:12.338728 1220973 system_pods.go:89] "coredns-668d6bf9bc-l8dqs" [d22da438-7207-4ea5-886e-4877202a0503] Running
	I0407 13:51:12.338736 1220973 system_pods.go:89] "etcd-default-k8s-diff-port-405061" [616d0285-308b-4f87-a840-2d6c4aafa12b] Running
	I0407 13:51:12.338741 1220973 system_pods.go:89] "kube-apiserver-default-k8s-diff-port-405061" [2bccbc06-ecc1-4a5c-80b4-1b1287cad2a8] Running
	I0407 13:51:12.338747 1220973 system_pods.go:89] "kube-controller-manager-default-k8s-diff-port-405061" [f6ef48bd-c717-4a62-90b5-2ba0d395dc23] Running
	I0407 13:51:12.338751 1220973 system_pods.go:89] "kube-proxy-59k7q" [fd139676-0ec9-4996-8f72-b2cc18db7c58] Running
	I0407 13:51:12.338756 1220973 system_pods.go:89] "kube-scheduler-default-k8s-diff-port-405061" [7691ca99-87e9-4a20-8e8a-ad956b63c8f1] Running
	I0407 13:51:12.338765 1220973 system_pods.go:89] "metrics-server-f79f97bbb-m78vh" [29d4eed6-dbb9-4a42-a4ed-644adfc6c32e] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0407 13:51:12.338770 1220973 system_pods.go:89] "storage-provisioner" [81745e26-f62c-431f-a4ed-8919d519705f] Running
	I0407 13:51:12.338782 1220973 system_pods.go:126] duration metric: took 4.37545ms to wait for k8s-apps to be running ...
	I0407 13:51:12.338792 1220973 system_svc.go:44] waiting for kubelet service to be running ....
	I0407 13:51:12.338848 1220973 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0407 13:51:12.359474 1220973 system_svc.go:56] duration metric: took 20.668679ms WaitForService to wait for kubelet
	I0407 13:51:12.359522 1220973 kubeadm.go:582] duration metric: took 4m25.997559577s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0407 13:51:12.359551 1220973 node_conditions.go:102] verifying NodePressure condition ...
	I0407 13:51:12.363017 1220973 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0407 13:51:12.363064 1220973 node_conditions.go:123] node cpu capacity is 2
	I0407 13:51:12.363082 1220973 node_conditions.go:105] duration metric: took 3.524897ms to run NodePressure ...
	I0407 13:51:12.363101 1220973 start.go:241] waiting for startup goroutines ...
	I0407 13:51:12.363118 1220973 start.go:246] waiting for cluster config update ...
	I0407 13:51:12.363136 1220973 start.go:255] writing updated cluster config ...
	I0407 13:51:12.363481 1220973 ssh_runner.go:195] Run: rm -f paused
	I0407 13:51:12.432521 1220973 start.go:600] kubectl: 1.32.3, cluster: 1.32.2 (minor skew: 0)
	I0407 13:51:12.436018 1220973 out.go:177] * Done! kubectl is now configured to use "default-k8s-diff-port-405061" cluster and "default" namespace by default
	I0407 13:51:11.623233 1229086 pod_ready.go:103] pod "coredns-668d6bf9bc-wtbtr" in "kube-system" namespace has status "Ready":"False"
	I0407 13:51:12.620924 1229086 pod_ready.go:93] pod "coredns-668d6bf9bc-wtbtr" in "kube-system" namespace has status "Ready":"True"
	I0407 13:51:12.620966 1229086 pod_ready.go:82] duration metric: took 13.006345622s for pod "coredns-668d6bf9bc-wtbtr" in "kube-system" namespace to be "Ready" ...
	I0407 13:51:12.620982 1229086 pod_ready.go:79] waiting up to 15m0s for pod "etcd-flannel-056871" in "kube-system" namespace to be "Ready" ...
	I0407 13:51:12.626794 1229086 pod_ready.go:93] pod "etcd-flannel-056871" in "kube-system" namespace has status "Ready":"True"
	I0407 13:51:12.626826 1229086 pod_ready.go:82] duration metric: took 5.835446ms for pod "etcd-flannel-056871" in "kube-system" namespace to be "Ready" ...
	I0407 13:51:12.626842 1229086 pod_ready.go:79] waiting up to 15m0s for pod "kube-apiserver-flannel-056871" in "kube-system" namespace to be "Ready" ...
	I0407 13:51:12.631694 1229086 pod_ready.go:93] pod "kube-apiserver-flannel-056871" in "kube-system" namespace has status "Ready":"True"
	I0407 13:51:12.631725 1229086 pod_ready.go:82] duration metric: took 4.874755ms for pod "kube-apiserver-flannel-056871" in "kube-system" namespace to be "Ready" ...
	I0407 13:51:12.631742 1229086 pod_ready.go:79] waiting up to 15m0s for pod "kube-controller-manager-flannel-056871" in "kube-system" namespace to be "Ready" ...
	I0407 13:51:12.636423 1229086 pod_ready.go:93] pod "kube-controller-manager-flannel-056871" in "kube-system" namespace has status "Ready":"True"
	I0407 13:51:12.636454 1229086 pod_ready.go:82] duration metric: took 4.705104ms for pod "kube-controller-manager-flannel-056871" in "kube-system" namespace to be "Ready" ...
	I0407 13:51:12.636468 1229086 pod_ready.go:79] waiting up to 15m0s for pod "kube-proxy-smtjx" in "kube-system" namespace to be "Ready" ...
	I0407 13:51:12.641084 1229086 pod_ready.go:93] pod "kube-proxy-smtjx" in "kube-system" namespace has status "Ready":"True"
	I0407 13:51:12.641117 1229086 pod_ready.go:82] duration metric: took 4.640592ms for pod "kube-proxy-smtjx" in "kube-system" namespace to be "Ready" ...
	I0407 13:51:12.641134 1229086 pod_ready.go:79] waiting up to 15m0s for pod "kube-scheduler-flannel-056871" in "kube-system" namespace to be "Ready" ...
	I0407 13:51:13.019186 1229086 pod_ready.go:93] pod "kube-scheduler-flannel-056871" in "kube-system" namespace has status "Ready":"True"
	I0407 13:51:13.019227 1229086 pod_ready.go:82] duration metric: took 378.081871ms for pod "kube-scheduler-flannel-056871" in "kube-system" namespace to be "Ready" ...
	I0407 13:51:13.019245 1229086 pod_ready.go:39] duration metric: took 13.753874082s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0407 13:51:13.019269 1229086 api_server.go:52] waiting for apiserver process to appear ...
	I0407 13:51:13.019345 1229086 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0407 13:51:13.036816 1229086 api_server.go:72] duration metric: took 23.18356003s to wait for apiserver process to appear ...
	I0407 13:51:13.036852 1229086 api_server.go:88] waiting for apiserver healthz status ...
	I0407 13:51:13.036873 1229086 api_server.go:253] Checking apiserver healthz at https://192.168.61.247:8443/healthz ...
	I0407 13:51:13.043471 1229086 api_server.go:279] https://192.168.61.247:8443/healthz returned 200:
	ok
	I0407 13:51:13.044789 1229086 api_server.go:141] control plane version: v1.32.2
	I0407 13:51:13.044824 1229086 api_server.go:131] duration metric: took 7.963841ms to wait for apiserver health ...
	I0407 13:51:13.044837 1229086 system_pods.go:43] waiting for kube-system pods to appear ...
	I0407 13:51:13.221807 1229086 system_pods.go:59] 7 kube-system pods found
	I0407 13:51:13.221847 1229086 system_pods.go:61] "coredns-668d6bf9bc-wtbtr" [b2923c78-dfb4-45f4-8d9c-c704efa16770] Running
	I0407 13:51:13.221852 1229086 system_pods.go:61] "etcd-flannel-056871" [a483a329-3322-45e9-a8d2-71767ab99f59] Running
	I0407 13:51:13.221856 1229086 system_pods.go:61] "kube-apiserver-flannel-056871" [f3043dae-0f67-4698-be96-b24b62b28437] Running
	I0407 13:51:13.221861 1229086 system_pods.go:61] "kube-controller-manager-flannel-056871" [ac2429e7-b9c0-4ef6-b9a2-d6213321fed6] Running
	I0407 13:51:13.221866 1229086 system_pods.go:61] "kube-proxy-smtjx" [7a3177c3-d1cd-45b3-ae8a-fc2046381c19] Running
	I0407 13:51:13.221871 1229086 system_pods.go:61] "kube-scheduler-flannel-056871" [ef4e2bc1-8a80-41ea-b563-3755728b1363] Running
	I0407 13:51:13.221877 1229086 system_pods.go:61] "storage-provisioner" [1e8fc621-4ec4-4579-bc5e-f59b83a0394d] Running
	I0407 13:51:13.221885 1229086 system_pods.go:74] duration metric: took 177.040439ms to wait for pod list to return data ...
	I0407 13:51:13.221896 1229086 default_sa.go:34] waiting for default service account to be created ...
	I0407 13:51:13.421264 1229086 default_sa.go:45] found service account: "default"
	I0407 13:51:13.421310 1229086 default_sa.go:55] duration metric: took 199.406683ms for default service account to be created ...
	I0407 13:51:13.421325 1229086 system_pods.go:116] waiting for k8s-apps to be running ...
	I0407 13:51:13.619341 1229086 system_pods.go:86] 7 kube-system pods found
	I0407 13:51:13.619373 1229086 system_pods.go:89] "coredns-668d6bf9bc-wtbtr" [b2923c78-dfb4-45f4-8d9c-c704efa16770] Running
	I0407 13:51:13.619380 1229086 system_pods.go:89] "etcd-flannel-056871" [a483a329-3322-45e9-a8d2-71767ab99f59] Running
	I0407 13:51:13.619383 1229086 system_pods.go:89] "kube-apiserver-flannel-056871" [f3043dae-0f67-4698-be96-b24b62b28437] Running
	I0407 13:51:13.619388 1229086 system_pods.go:89] "kube-controller-manager-flannel-056871" [ac2429e7-b9c0-4ef6-b9a2-d6213321fed6] Running
	I0407 13:51:13.619393 1229086 system_pods.go:89] "kube-proxy-smtjx" [7a3177c3-d1cd-45b3-ae8a-fc2046381c19] Running
	I0407 13:51:13.619397 1229086 system_pods.go:89] "kube-scheduler-flannel-056871" [ef4e2bc1-8a80-41ea-b563-3755728b1363] Running
	I0407 13:51:13.619402 1229086 system_pods.go:89] "storage-provisioner" [1e8fc621-4ec4-4579-bc5e-f59b83a0394d] Running
	I0407 13:51:13.619410 1229086 system_pods.go:126] duration metric: took 198.077767ms to wait for k8s-apps to be running ...
	I0407 13:51:13.619419 1229086 system_svc.go:44] waiting for kubelet service to be running ....
	I0407 13:51:13.619467 1229086 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0407 13:51:13.639988 1229086 system_svc.go:56] duration metric: took 20.552018ms WaitForService to wait for kubelet
	I0407 13:51:13.640033 1229086 kubeadm.go:582] duration metric: took 23.786782845s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0407 13:51:13.640057 1229086 node_conditions.go:102] verifying NodePressure condition ...
	I0407 13:51:13.819686 1229086 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0407 13:51:13.819725 1229086 node_conditions.go:123] node cpu capacity is 2
	I0407 13:51:13.819743 1229086 node_conditions.go:105] duration metric: took 179.679142ms to run NodePressure ...
	I0407 13:51:13.819760 1229086 start.go:241] waiting for startup goroutines ...
	I0407 13:51:13.819768 1229086 start.go:246] waiting for cluster config update ...
	I0407 13:51:13.819782 1229086 start.go:255] writing updated cluster config ...
	I0407 13:51:13.820097 1229086 ssh_runner.go:195] Run: rm -f paused
	I0407 13:51:13.884127 1229086 start.go:600] kubectl: 1.32.3, cluster: 1.32.2 (minor skew: 0)
	I0407 13:51:13.887015 1229086 out.go:177] * Done! kubectl is now configured to use "flannel-056871" cluster and "default" namespace by default
	I0407 13:51:10.159135 1230577 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0407 13:51:10.170594 1230577 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
	I0407 13:51:10.193430 1230577 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0407 13:51:10.193528 1230577 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.32.2/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0407 13:51:10.193556 1230577 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.32.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes bridge-056871 minikube.k8s.io/updated_at=2025_04_07T13_51_10_0700 minikube.k8s.io/version=v1.35.0 minikube.k8s.io/commit=33e6edc58d2014d70e908473920ef4ac8eae1e43 minikube.k8s.io/name=bridge-056871 minikube.k8s.io/primary=true
	I0407 13:51:10.213523 1230577 ops.go:34] apiserver oom_adj: -16
	I0407 13:51:10.404665 1230577 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.32.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0407 13:51:10.905503 1230577 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.32.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0407 13:51:11.405109 1230577 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.32.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0407 13:51:11.905139 1230577 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.32.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0407 13:51:12.404791 1230577 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.32.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0407 13:51:12.905795 1230577 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.32.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0407 13:51:13.405662 1230577 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.32.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0407 13:51:13.904970 1230577 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.32.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0407 13:51:14.405510 1230577 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.32.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0407 13:51:14.503334 1230577 kubeadm.go:1113] duration metric: took 4.309889147s to wait for elevateKubeSystemPrivileges
	I0407 13:51:14.503378 1230577 kubeadm.go:394] duration metric: took 15.181607716s to StartCluster
	I0407 13:51:14.503406 1230577 settings.go:142] acquiring lock: {Name:mk19c4dc5d7992642f3fe5ca0bdb3ea65af01b65 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0407 13:51:14.503500 1230577 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/20602-1162386/kubeconfig
	I0407 13:51:14.504964 1230577 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20602-1162386/kubeconfig: {Name:mk712863958f7dbf2601dd82dc9ca7bea42ef42f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0407 13:51:14.505295 1230577 start.go:235] Will wait 15m0s for node &{Name: IP:192.168.50.60 Port:8443 KubernetesVersion:v1.32.2 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0407 13:51:14.505337 1230577 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.32.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I0407 13:51:14.505352 1230577 addons.go:511] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I0407 13:51:14.505468 1230577 addons.go:69] Setting storage-provisioner=true in profile "bridge-056871"
	I0407 13:51:14.505488 1230577 addons.go:238] Setting addon storage-provisioner=true in "bridge-056871"
	I0407 13:51:14.505493 1230577 addons.go:69] Setting default-storageclass=true in profile "bridge-056871"
	I0407 13:51:14.505526 1230577 host.go:66] Checking if "bridge-056871" exists ...
	I0407 13:51:14.505535 1230577 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "bridge-056871"
	I0407 13:51:14.505611 1230577 config.go:182] Loaded profile config "bridge-056871": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.32.2
	I0407 13:51:14.506128 1230577 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0407 13:51:14.506169 1230577 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0407 13:51:14.506137 1230577 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0407 13:51:14.506262 1230577 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0407 13:51:14.507179 1230577 out.go:177] * Verifying Kubernetes components...
	I0407 13:51:14.508939 1230577 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0407 13:51:14.528710 1230577 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46503
	I0407 13:51:14.529591 1230577 main.go:141] libmachine: () Calling .GetVersion
	I0407 13:51:14.530356 1230577 main.go:141] libmachine: Using API Version  1
	I0407 13:51:14.530392 1230577 main.go:141] libmachine: () Calling .SetConfigRaw
	I0407 13:51:14.531236 1230577 main.go:141] libmachine: () Calling .GetMachineName
	I0407 13:51:14.531991 1230577 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0407 13:51:14.532040 1230577 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0407 13:51:14.532162 1230577 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37415
	I0407 13:51:14.532729 1230577 main.go:141] libmachine: () Calling .GetVersion
	I0407 13:51:14.533287 1230577 main.go:141] libmachine: Using API Version  1
	I0407 13:51:14.533321 1230577 main.go:141] libmachine: () Calling .SetConfigRaw
	I0407 13:51:14.533838 1230577 main.go:141] libmachine: () Calling .GetMachineName
	I0407 13:51:14.534079 1230577 main.go:141] libmachine: (bridge-056871) Calling .GetState
	I0407 13:51:14.538745 1230577 addons.go:238] Setting addon default-storageclass=true in "bridge-056871"
	I0407 13:51:14.538806 1230577 host.go:66] Checking if "bridge-056871" exists ...
	I0407 13:51:14.539192 1230577 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0407 13:51:14.539253 1230577 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0407 13:51:14.554829 1230577 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41253
	I0407 13:51:14.555631 1230577 main.go:141] libmachine: () Calling .GetVersion
	I0407 13:51:14.556333 1230577 main.go:141] libmachine: Using API Version  1
	I0407 13:51:14.556382 1230577 main.go:141] libmachine: () Calling .SetConfigRaw
	I0407 13:51:14.556874 1230577 main.go:141] libmachine: () Calling .GetMachineName
	I0407 13:51:14.557119 1230577 main.go:141] libmachine: (bridge-056871) Calling .GetState
	I0407 13:51:14.559774 1230577 main.go:141] libmachine: (bridge-056871) Calling .DriverName
	I0407 13:51:14.561700 1230577 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34847
	I0407 13:51:14.562114 1230577 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0407 13:51:14.562326 1230577 main.go:141] libmachine: () Calling .GetVersion
	I0407 13:51:14.562910 1230577 main.go:141] libmachine: Using API Version  1
	I0407 13:51:14.562944 1230577 main.go:141] libmachine: () Calling .SetConfigRaw
	I0407 13:51:14.563469 1230577 main.go:141] libmachine: () Calling .GetMachineName
	I0407 13:51:14.563736 1230577 addons.go:435] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0407 13:51:14.563756 1230577 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0407 13:51:14.563775 1230577 main.go:141] libmachine: (bridge-056871) Calling .GetSSHHostname
	I0407 13:51:14.564155 1230577 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0407 13:51:14.564224 1230577 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0407 13:51:14.567924 1230577 main.go:141] libmachine: (bridge-056871) DBG | domain bridge-056871 has defined MAC address 52:54:00:d9:a5:38 in network mk-bridge-056871
	I0407 13:51:14.568724 1230577 main.go:141] libmachine: (bridge-056871) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d9:a5:38", ip: ""} in network mk-bridge-056871: {Iface:virbr2 ExpiryTime:2025-04-07 14:50:42 +0000 UTC Type:0 Mac:52:54:00:d9:a5:38 Iaid: IPaddr:192.168.50.60 Prefix:24 Hostname:bridge-056871 Clientid:01:52:54:00:d9:a5:38}
	I0407 13:51:14.568762 1230577 main.go:141] libmachine: (bridge-056871) DBG | domain bridge-056871 has defined IP address 192.168.50.60 and MAC address 52:54:00:d9:a5:38 in network mk-bridge-056871
	I0407 13:51:14.569160 1230577 main.go:141] libmachine: (bridge-056871) Calling .GetSSHPort
	I0407 13:51:14.569440 1230577 main.go:141] libmachine: (bridge-056871) Calling .GetSSHKeyPath
	I0407 13:51:14.569994 1230577 main.go:141] libmachine: (bridge-056871) Calling .GetSSHUsername
	I0407 13:51:14.570256 1230577 sshutil.go:53] new ssh client: &{IP:192.168.50.60 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20602-1162386/.minikube/machines/bridge-056871/id_rsa Username:docker}
	I0407 13:51:14.583408 1230577 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45805
	I0407 13:51:14.584133 1230577 main.go:141] libmachine: () Calling .GetVersion
	I0407 13:51:14.584735 1230577 main.go:141] libmachine: Using API Version  1
	I0407 13:51:14.584766 1230577 main.go:141] libmachine: () Calling .SetConfigRaw
	I0407 13:51:14.585231 1230577 main.go:141] libmachine: () Calling .GetMachineName
	I0407 13:51:14.585481 1230577 main.go:141] libmachine: (bridge-056871) Calling .GetState
	I0407 13:51:14.587558 1230577 main.go:141] libmachine: (bridge-056871) Calling .DriverName
	I0407 13:51:14.587893 1230577 addons.go:435] installing /etc/kubernetes/addons/storageclass.yaml
	I0407 13:51:14.587916 1230577 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0407 13:51:14.587938 1230577 main.go:141] libmachine: (bridge-056871) Calling .GetSSHHostname
	I0407 13:51:14.591985 1230577 main.go:141] libmachine: (bridge-056871) DBG | domain bridge-056871 has defined MAC address 52:54:00:d9:a5:38 in network mk-bridge-056871
	I0407 13:51:14.592534 1230577 main.go:141] libmachine: (bridge-056871) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d9:a5:38", ip: ""} in network mk-bridge-056871: {Iface:virbr2 ExpiryTime:2025-04-07 14:50:42 +0000 UTC Type:0 Mac:52:54:00:d9:a5:38 Iaid: IPaddr:192.168.50.60 Prefix:24 Hostname:bridge-056871 Clientid:01:52:54:00:d9:a5:38}
	I0407 13:51:14.592572 1230577 main.go:141] libmachine: (bridge-056871) DBG | domain bridge-056871 has defined IP address 192.168.50.60 and MAC address 52:54:00:d9:a5:38 in network mk-bridge-056871
	I0407 13:51:14.592825 1230577 main.go:141] libmachine: (bridge-056871) Calling .GetSSHPort
	I0407 13:51:14.593111 1230577 main.go:141] libmachine: (bridge-056871) Calling .GetSSHKeyPath
	I0407 13:51:14.593318 1230577 main.go:141] libmachine: (bridge-056871) Calling .GetSSHUsername
	I0407 13:51:14.593529 1230577 sshutil.go:53] new ssh client: &{IP:192.168.50.60 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20602-1162386/.minikube/machines/bridge-056871/id_rsa Username:docker}
	I0407 13:51:14.699702 1230577 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.32.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.50.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.32.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I0407 13:51:14.728176 1230577 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0407 13:51:14.870821 1230577 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.2/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0407 13:51:14.891756 1230577 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.2/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0407 13:51:15.124664 1230577 start.go:971] {"host.minikube.internal": 192.168.50.1} host record injected into CoreDNS's ConfigMap
	I0407 13:51:15.126051 1230577 node_ready.go:35] waiting up to 15m0s for node "bridge-056871" to be "Ready" ...
	I0407 13:51:15.138090 1230577 node_ready.go:49] node "bridge-056871" has status "Ready":"True"
	I0407 13:51:15.138128 1230577 node_ready.go:38] duration metric: took 12.039283ms for node "bridge-056871" to be "Ready" ...
	I0407 13:51:15.138140 1230577 pod_ready.go:36] extra waiting up to 15m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0407 13:51:15.144672 1230577 pod_ready.go:79] waiting up to 15m0s for pod "coredns-668d6bf9bc-7hzbw" in "kube-system" namespace to be "Ready" ...
	I0407 13:51:15.633193 1230577 kapi.go:214] "coredns" deployment in "kube-system" namespace and "bridge-056871" context rescaled to 1 replicas
	I0407 13:51:15.674183 1230577 main.go:141] libmachine: Making call to close driver server
	I0407 13:51:15.674220 1230577 main.go:141] libmachine: (bridge-056871) Calling .Close
	I0407 13:51:15.674224 1230577 main.go:141] libmachine: Making call to close driver server
	I0407 13:51:15.674246 1230577 main.go:141] libmachine: (bridge-056871) Calling .Close
	I0407 13:51:15.674592 1230577 main.go:141] libmachine: Successfully made call to close driver server
	I0407 13:51:15.674621 1230577 main.go:141] libmachine: Making call to close connection to plugin binary
	I0407 13:51:15.674632 1230577 main.go:141] libmachine: Making call to close driver server
	I0407 13:51:15.674642 1230577 main.go:141] libmachine: (bridge-056871) Calling .Close
	I0407 13:51:15.674658 1230577 main.go:141] libmachine: Successfully made call to close driver server
	I0407 13:51:15.674678 1230577 main.go:141] libmachine: Making call to close connection to plugin binary
	I0407 13:51:15.674692 1230577 main.go:141] libmachine: Making call to close driver server
	I0407 13:51:15.674703 1230577 main.go:141] libmachine: (bridge-056871) Calling .Close
	I0407 13:51:15.674707 1230577 main.go:141] libmachine: (bridge-056871) DBG | Closing plugin on server side
	I0407 13:51:15.674897 1230577 main.go:141] libmachine: Successfully made call to close driver server
	I0407 13:51:15.674919 1230577 main.go:141] libmachine: Making call to close connection to plugin binary
	I0407 13:51:15.675007 1230577 main.go:141] libmachine: Successfully made call to close driver server
	I0407 13:51:15.675028 1230577 main.go:141] libmachine: Making call to close connection to plugin binary
	I0407 13:51:15.675048 1230577 main.go:141] libmachine: (bridge-056871) DBG | Closing plugin on server side
	I0407 13:51:15.704715 1230577 main.go:141] libmachine: Making call to close driver server
	I0407 13:51:15.704742 1230577 main.go:141] libmachine: (bridge-056871) Calling .Close
	I0407 13:51:15.705311 1230577 main.go:141] libmachine: (bridge-056871) DBG | Closing plugin on server side
	I0407 13:51:15.705350 1230577 main.go:141] libmachine: Successfully made call to close driver server
	I0407 13:51:15.705370 1230577 main.go:141] libmachine: Making call to close connection to plugin binary
	I0407 13:51:15.707869 1230577 out.go:177] * Enabled addons: storage-provisioner, default-storageclass
	I0407 13:51:15.709482 1230577 addons.go:514] duration metric: took 1.204112692s for enable addons: enabled=[storage-provisioner default-storageclass]
	I0407 13:51:17.151557 1230577 pod_ready.go:103] pod "coredns-668d6bf9bc-7hzbw" in "kube-system" namespace has status "Ready":"False"
	I0407 13:51:19.651176 1230577 pod_ready.go:103] pod "coredns-668d6bf9bc-7hzbw" in "kube-system" namespace has status "Ready":"False"
	I0407 13:51:22.152477 1230577 pod_ready.go:103] pod "coredns-668d6bf9bc-7hzbw" in "kube-system" namespace has status "Ready":"False"
	I0407 13:51:24.652542 1230577 pod_ready.go:103] pod "coredns-668d6bf9bc-7hzbw" in "kube-system" namespace has status "Ready":"False"
	I0407 13:51:26.152821 1230577 pod_ready.go:98] pod "coredns-668d6bf9bc-7hzbw" in "kube-system" namespace has status phase "Succeeded" (skipping!): {Phase:Succeeded Conditions:[{Type:PodReadyToStartContainers Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2025-04-07 13:51:26 +0000 UTC Reason: Message:} {Type:Initialized Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2025-04-07 13:51:14 +0000 UTC Reason:PodCompleted Message:} {Type:Ready Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2025-04-07 13:51:14 +0000 UTC Reason:PodCompleted Message:} {Type:ContainersReady Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2025-04-07 13:51:14 +0000 UTC Reason:PodCompleted Message:} {Type:PodScheduled Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2025-04-07 13:51:14 +0000 UTC Reason: Message:}] Message: Reason: NominatedNodeName: HostIP:192.168.50.60 HostIPs:[{IP:192.168.50.60}] PodIP:10.244.0.2 PodIPs:[{IP:10.244.0.2}] StartTime:2025-04-07 13:51:14 +0000 UTC InitContainerStatuses:[] ContainerStatuses:[{Name:coredns State:{Waiting:nil Running:nil Terminated:&ContainerStateTerminated{ExitCode:0,Signal:0,Reason:Completed,Message:,StartedAt:2025-04-07 13:51:15 +0000 UTC,FinishedAt:2025-04-07 13:51:25 +0000 UTC,ContainerID:cri-o://ca05d83a89ad284ae2de0937d1b89ae0dc71b45c0edaa12e93727b3e5adc2247,}} LastTerminationState:{Waiting:nil Running:nil Terminated:nil} Ready:false RestartCount:0 Image:registry.k8s.io/coredns/coredns:v1.11.3 ImageID:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6 ContainerID:cri-o://ca05d83a89ad284ae2de0937d1b89ae0dc71b45c0edaa12e93727b3e5adc2247 Started:0xc0019098a0 AllocatedResources:map[] Resources:nil VolumeMounts:[{Name:config-volume MountPath:/etc/coredns ReadOnly:true RecursiveReadOnly:0xc00262bf00} {Name:kube-api-access-tcbc5 MountPath:/var/run/secrets/kubernetes.io/serviceaccount ReadOnly:true RecursiveReadOnly:0xc00262bf10}] User:nil AllocatedResourcesStatus:[]}] QOSClass:Burstable EphemeralContainerStatuses:[] Resize: ResourceClaimStatuses:[]}
	I0407 13:51:26.152864 1230577 pod_ready.go:82] duration metric: took 11.008143482s for pod "coredns-668d6bf9bc-7hzbw" in "kube-system" namespace to be "Ready" ...
	E0407 13:51:26.152881 1230577 pod_ready.go:67] WaitExtra: waitPodCondition: pod "coredns-668d6bf9bc-7hzbw" in "kube-system" namespace has status phase "Succeeded" (skipping!): {Phase:Succeeded Conditions:[{Type:PodReadyToStartContainers Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2025-04-07 13:51:26 +0000 UTC Reason: Message:} {Type:Initialized Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2025-04-07 13:51:14 +0000 UTC Reason:PodCompleted Message:} {Type:Ready Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2025-04-07 13:51:14 +0000 UTC Reason:PodCompleted Message:} {Type:ContainersReady Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2025-04-07 13:51:14 +0000 UTC Reason:PodCompleted Message:} {Type:PodScheduled Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2025-04-07 13:51:14 +0000 UTC Reason: Message:}] Message: Reason: NominatedNodeName: HostIP:192.168.50.60 HostIPs:[{IP:192.168.50.60}] PodIP:10.244.0.2 PodIPs:[{IP:10.244.0.2}] StartTime:2025-04-07 13:51:14 +0000 UTC InitContainerStatuses:[] ContainerStatuses:[{Name:coredns State:{Waiting:nil Running:nil Terminated:&ContainerStateTerminated{ExitCode:0,Signal:0,Reason:Completed,Message:,StartedAt:2025-04-07 13:51:15 +0000 UTC,FinishedAt:2025-04-07 13:51:25 +0000 UTC,ContainerID:cri-o://ca05d83a89ad284ae2de0937d1b89ae0dc71b45c0edaa12e93727b3e5adc2247,}} LastTerminationState:{Waiting:nil Running:nil Terminated:nil} Ready:false RestartCount:0 Image:registry.k8s.io/coredns/coredns:v1.11.3 ImageID:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6 ContainerID:cri-o://ca05d83a89ad284ae2de0937d1b89ae0dc71b45c0edaa12e93727b3e5adc2247 Started:0xc0019098a0 AllocatedResources:map[] Resources:nil VolumeMounts:[{Name:config-volume MountPath:/etc/coredns ReadOnly:true RecursiveReadOnly:0xc00262bf00} {Name:kube-api-access-tcbc5 MountPath:/var/run/secrets/kubernetes.io/serviceaccount ReadOnly:true RecursiveReadOnly:0xc00262bf10}] User:nil AllocatedResourcesStatus:[]}] QOSClass:Burstable EphemeralContainerStatuses:[] Resize: ResourceClaimStatuses:[]}
	I0407 13:51:26.152921 1230577 pod_ready.go:79] waiting up to 15m0s for pod "coredns-668d6bf9bc-nld4f" in "kube-system" namespace to be "Ready" ...
	I0407 13:51:26.158910 1230577 pod_ready.go:93] pod "coredns-668d6bf9bc-nld4f" in "kube-system" namespace has status "Ready":"True"
	I0407 13:51:26.158944 1230577 pod_ready.go:82] duration metric: took 6.010021ms for pod "coredns-668d6bf9bc-nld4f" in "kube-system" namespace to be "Ready" ...
	I0407 13:51:26.158961 1230577 pod_ready.go:79] waiting up to 15m0s for pod "etcd-bridge-056871" in "kube-system" namespace to be "Ready" ...
	I0407 13:51:26.167441 1230577 pod_ready.go:93] pod "etcd-bridge-056871" in "kube-system" namespace has status "Ready":"True"
	I0407 13:51:26.167469 1230577 pod_ready.go:82] duration metric: took 8.500134ms for pod "etcd-bridge-056871" in "kube-system" namespace to be "Ready" ...
	I0407 13:51:26.167479 1230577 pod_ready.go:79] waiting up to 15m0s for pod "kube-apiserver-bridge-056871" in "kube-system" namespace to be "Ready" ...
	I0407 13:51:26.173495 1230577 pod_ready.go:93] pod "kube-apiserver-bridge-056871" in "kube-system" namespace has status "Ready":"True"
	I0407 13:51:26.173537 1230577 pod_ready.go:82] duration metric: took 6.051921ms for pod "kube-apiserver-bridge-056871" in "kube-system" namespace to be "Ready" ...
	I0407 13:51:26.173549 1230577 pod_ready.go:79] waiting up to 15m0s for pod "kube-controller-manager-bridge-056871" in "kube-system" namespace to be "Ready" ...
	I0407 13:51:26.178528 1230577 pod_ready.go:93] pod "kube-controller-manager-bridge-056871" in "kube-system" namespace has status "Ready":"True"
	I0407 13:51:26.178554 1230577 pod_ready.go:82] duration metric: took 4.998894ms for pod "kube-controller-manager-bridge-056871" in "kube-system" namespace to be "Ready" ...
	I0407 13:51:26.178567 1230577 pod_ready.go:79] waiting up to 15m0s for pod "kube-proxy-2ftsv" in "kube-system" namespace to be "Ready" ...
	I0407 13:51:26.549366 1230577 pod_ready.go:93] pod "kube-proxy-2ftsv" in "kube-system" namespace has status "Ready":"True"
	I0407 13:51:26.549405 1230577 pod_ready.go:82] duration metric: took 370.829414ms for pod "kube-proxy-2ftsv" in "kube-system" namespace to be "Ready" ...
	I0407 13:51:26.549421 1230577 pod_ready.go:79] waiting up to 15m0s for pod "kube-scheduler-bridge-056871" in "kube-system" namespace to be "Ready" ...
	I0407 13:51:26.949220 1230577 pod_ready.go:93] pod "kube-scheduler-bridge-056871" in "kube-system" namespace has status "Ready":"True"
	I0407 13:51:26.949257 1230577 pod_ready.go:82] duration metric: took 399.827446ms for pod "kube-scheduler-bridge-056871" in "kube-system" namespace to be "Ready" ...
	I0407 13:51:26.949270 1230577 pod_ready.go:39] duration metric: took 11.811111346s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0407 13:51:26.949297 1230577 api_server.go:52] waiting for apiserver process to appear ...
	I0407 13:51:26.949366 1230577 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0407 13:51:26.965744 1230577 api_server.go:72] duration metric: took 12.46041284s to wait for apiserver process to appear ...
	I0407 13:51:26.965775 1230577 api_server.go:88] waiting for apiserver healthz status ...
	I0407 13:51:26.965799 1230577 api_server.go:253] Checking apiserver healthz at https://192.168.50.60:8443/healthz ...
	I0407 13:51:26.971830 1230577 api_server.go:279] https://192.168.50.60:8443/healthz returned 200:
	ok
	I0407 13:51:26.973406 1230577 api_server.go:141] control plane version: v1.32.2
	I0407 13:51:26.973456 1230577 api_server.go:131] duration metric: took 7.670225ms to wait for apiserver health ...
	I0407 13:51:26.973471 1230577 system_pods.go:43] waiting for kube-system pods to appear ...
	I0407 13:51:27.151802 1230577 system_pods.go:59] 7 kube-system pods found
	I0407 13:51:27.151854 1230577 system_pods.go:61] "coredns-668d6bf9bc-nld4f" [3887bc68-10af-41c6-bf18-2deca678221c] Running
	I0407 13:51:27.151864 1230577 system_pods.go:61] "etcd-bridge-056871" [6fbbd69f-41ab-4e93-adfe-653b3df252db] Running
	I0407 13:51:27.151871 1230577 system_pods.go:61] "kube-apiserver-bridge-056871" [ca1ffe69-93d4-4bd9-a8ce-459be6f7f9c5] Running
	I0407 13:51:27.151877 1230577 system_pods.go:61] "kube-controller-manager-bridge-056871" [c3b30836-3a6c-4248-a79b-28e7586e6353] Running
	I0407 13:51:27.151882 1230577 system_pods.go:61] "kube-proxy-2ftsv" [8e02d336-d190-4428-8bdd-88bf28e0b4bc] Running
	I0407 13:51:27.151887 1230577 system_pods.go:61] "kube-scheduler-bridge-056871" [b502f2c1-5cdc-49e0-b66d-ae0a1363f03e] Running
	I0407 13:51:27.151892 1230577 system_pods.go:61] "storage-provisioner" [96d93b22-4965-46a2-83c8-d7742fa76b6a] Running
	I0407 13:51:27.151902 1230577 system_pods.go:74] duration metric: took 178.422752ms to wait for pod list to return data ...
	I0407 13:51:27.151928 1230577 default_sa.go:34] waiting for default service account to be created ...
	I0407 13:51:27.349651 1230577 default_sa.go:45] found service account: "default"
	I0407 13:51:27.349688 1230577 default_sa.go:55] duration metric: took 197.749966ms for default service account to be created ...
	I0407 13:51:27.349737 1230577 system_pods.go:116] waiting for k8s-apps to be running ...
	I0407 13:51:27.559111 1230577 system_pods.go:86] 7 kube-system pods found
	I0407 13:51:27.559145 1230577 system_pods.go:89] "coredns-668d6bf9bc-nld4f" [3887bc68-10af-41c6-bf18-2deca678221c] Running
	I0407 13:51:27.559151 1230577 system_pods.go:89] "etcd-bridge-056871" [6fbbd69f-41ab-4e93-adfe-653b3df252db] Running
	I0407 13:51:27.559155 1230577 system_pods.go:89] "kube-apiserver-bridge-056871" [ca1ffe69-93d4-4bd9-a8ce-459be6f7f9c5] Running
	I0407 13:51:27.559159 1230577 system_pods.go:89] "kube-controller-manager-bridge-056871" [c3b30836-3a6c-4248-a79b-28e7586e6353] Running
	I0407 13:51:27.559164 1230577 system_pods.go:89] "kube-proxy-2ftsv" [8e02d336-d190-4428-8bdd-88bf28e0b4bc] Running
	I0407 13:51:27.559167 1230577 system_pods.go:89] "kube-scheduler-bridge-056871" [b502f2c1-5cdc-49e0-b66d-ae0a1363f03e] Running
	I0407 13:51:27.559170 1230577 system_pods.go:89] "storage-provisioner" [96d93b22-4965-46a2-83c8-d7742fa76b6a] Running
	I0407 13:51:27.559177 1230577 system_pods.go:126] duration metric: took 209.432386ms to wait for k8s-apps to be running ...
	I0407 13:51:27.559185 1230577 system_svc.go:44] waiting for kubelet service to be running ....
	I0407 13:51:27.559240 1230577 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0407 13:51:27.575563 1230577 system_svc.go:56] duration metric: took 16.359151ms WaitForService to wait for kubelet
	I0407 13:51:27.575606 1230577 kubeadm.go:582] duration metric: took 13.070278894s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0407 13:51:27.575627 1230577 node_conditions.go:102] verifying NodePressure condition ...
	I0407 13:51:27.749256 1230577 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0407 13:51:27.749291 1230577 node_conditions.go:123] node cpu capacity is 2
	I0407 13:51:27.749305 1230577 node_conditions.go:105] duration metric: took 173.672077ms to run NodePressure ...
	I0407 13:51:27.749318 1230577 start.go:241] waiting for startup goroutines ...
	I0407 13:51:27.749326 1230577 start.go:246] waiting for cluster config update ...
	I0407 13:51:27.749341 1230577 start.go:255] writing updated cluster config ...
	I0407 13:51:27.749652 1230577 ssh_runner.go:195] Run: rm -f paused
	I0407 13:51:27.803571 1230577 start.go:600] kubectl: 1.32.3, cluster: 1.32.2 (minor skew: 0)
	I0407 13:51:27.807117 1230577 out.go:177] * Done! kubectl is now configured to use "bridge-056871" cluster and "default" namespace by default
	
	
	==> CRI-O <==
	Apr 07 13:59:03 old-k8s-version-435730 crio[627]: time="2025-04-07 13:59:03.864464814Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1744034343864443943,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:112689,},InodesUsed:&UInt64Value{Value:57,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=bc032281-8433-4197-9a42-48ad9d12360c name=/runtime.v1.ImageService/ImageFsInfo
	Apr 07 13:59:03 old-k8s-version-435730 crio[627]: time="2025-04-07 13:59:03.864922754Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=62a8c241-3bbd-4be8-a28c-348169f0936b name=/runtime.v1.RuntimeService/ListContainers
	Apr 07 13:59:03 old-k8s-version-435730 crio[627]: time="2025-04-07 13:59:03.864982926Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=62a8c241-3bbd-4be8-a28c-348169f0936b name=/runtime.v1.RuntimeService/ListContainers
	Apr 07 13:59:03 old-k8s-version-435730 crio[627]: time="2025-04-07 13:59:03.865015797Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{},}" file="otel-collector/interceptors.go:74" id=62a8c241-3bbd-4be8-a28c-348169f0936b name=/runtime.v1.RuntimeService/ListContainers
	Apr 07 13:59:03 old-k8s-version-435730 crio[627]: time="2025-04-07 13:59:03.895200688Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=dce1f354-4c00-4a7d-b545-54d8e41dd44e name=/runtime.v1.RuntimeService/Version
	Apr 07 13:59:03 old-k8s-version-435730 crio[627]: time="2025-04-07 13:59:03.895294193Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=dce1f354-4c00-4a7d-b545-54d8e41dd44e name=/runtime.v1.RuntimeService/Version
	Apr 07 13:59:03 old-k8s-version-435730 crio[627]: time="2025-04-07 13:59:03.896793853Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=4e18f3db-8acc-49e5-aa3c-b04bfeca17df name=/runtime.v1.ImageService/ImageFsInfo
	Apr 07 13:59:03 old-k8s-version-435730 crio[627]: time="2025-04-07 13:59:03.897165727Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1744034343897145523,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:112689,},InodesUsed:&UInt64Value{Value:57,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=4e18f3db-8acc-49e5-aa3c-b04bfeca17df name=/runtime.v1.ImageService/ImageFsInfo
	Apr 07 13:59:03 old-k8s-version-435730 crio[627]: time="2025-04-07 13:59:03.897706106Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=19851831-63e6-4d05-b2eb-d381284fea2d name=/runtime.v1.RuntimeService/ListContainers
	Apr 07 13:59:03 old-k8s-version-435730 crio[627]: time="2025-04-07 13:59:03.897769626Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=19851831-63e6-4d05-b2eb-d381284fea2d name=/runtime.v1.RuntimeService/ListContainers
	Apr 07 13:59:03 old-k8s-version-435730 crio[627]: time="2025-04-07 13:59:03.897804523Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{},}" file="otel-collector/interceptors.go:74" id=19851831-63e6-4d05-b2eb-d381284fea2d name=/runtime.v1.RuntimeService/ListContainers
	Apr 07 13:59:03 old-k8s-version-435730 crio[627]: time="2025-04-07 13:59:03.932404448Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=2dfaf35f-ff67-498f-beb0-2053bb0d3416 name=/runtime.v1.RuntimeService/Version
	Apr 07 13:59:03 old-k8s-version-435730 crio[627]: time="2025-04-07 13:59:03.932495251Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=2dfaf35f-ff67-498f-beb0-2053bb0d3416 name=/runtime.v1.RuntimeService/Version
	Apr 07 13:59:03 old-k8s-version-435730 crio[627]: time="2025-04-07 13:59:03.933985082Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=ca7f52c9-3d03-4a30-99d1-803ce814eeba name=/runtime.v1.ImageService/ImageFsInfo
	Apr 07 13:59:03 old-k8s-version-435730 crio[627]: time="2025-04-07 13:59:03.934353395Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1744034343934334067,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:112689,},InodesUsed:&UInt64Value{Value:57,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=ca7f52c9-3d03-4a30-99d1-803ce814eeba name=/runtime.v1.ImageService/ImageFsInfo
	Apr 07 13:59:03 old-k8s-version-435730 crio[627]: time="2025-04-07 13:59:03.934982692Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=92bc8acc-3a73-4e34-8d90-074e437ceb2d name=/runtime.v1.RuntimeService/ListContainers
	Apr 07 13:59:03 old-k8s-version-435730 crio[627]: time="2025-04-07 13:59:03.935041638Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=92bc8acc-3a73-4e34-8d90-074e437ceb2d name=/runtime.v1.RuntimeService/ListContainers
	Apr 07 13:59:03 old-k8s-version-435730 crio[627]: time="2025-04-07 13:59:03.935082063Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{},}" file="otel-collector/interceptors.go:74" id=92bc8acc-3a73-4e34-8d90-074e437ceb2d name=/runtime.v1.RuntimeService/ListContainers
	Apr 07 13:59:03 old-k8s-version-435730 crio[627]: time="2025-04-07 13:59:03.965387935Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=9a8e39ed-9646-452a-9adb-6c5044fe9721 name=/runtime.v1.RuntimeService/Version
	Apr 07 13:59:03 old-k8s-version-435730 crio[627]: time="2025-04-07 13:59:03.965454082Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=9a8e39ed-9646-452a-9adb-6c5044fe9721 name=/runtime.v1.RuntimeService/Version
	Apr 07 13:59:03 old-k8s-version-435730 crio[627]: time="2025-04-07 13:59:03.966836076Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=34a80478-27b6-473a-9c01-d7be46a6270a name=/runtime.v1.ImageService/ImageFsInfo
	Apr 07 13:59:03 old-k8s-version-435730 crio[627]: time="2025-04-07 13:59:03.967193192Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1744034343967174399,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:112689,},InodesUsed:&UInt64Value{Value:57,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=34a80478-27b6-473a-9c01-d7be46a6270a name=/runtime.v1.ImageService/ImageFsInfo
	Apr 07 13:59:03 old-k8s-version-435730 crio[627]: time="2025-04-07 13:59:03.967737282Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=0e2a9b7f-d0d2-48dc-bd18-f7335be46d55 name=/runtime.v1.RuntimeService/ListContainers
	Apr 07 13:59:03 old-k8s-version-435730 crio[627]: time="2025-04-07 13:59:03.967784280Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=0e2a9b7f-d0d2-48dc-bd18-f7335be46d55 name=/runtime.v1.RuntimeService/ListContainers
	Apr 07 13:59:03 old-k8s-version-435730 crio[627]: time="2025-04-07 13:59:03.967817252Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{},}" file="otel-collector/interceptors.go:74" id=0e2a9b7f-d0d2-48dc-bd18-f7335be46d55 name=/runtime.v1.RuntimeService/ListContainers
	
	
	==> container status <==
	CONTAINER           IMAGE               CREATED             STATE               NAME                ATTEMPT             POD ID              POD
	
	
	==> describe nodes <==
	command /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" failed with error: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	
	==> dmesg <==
	[Apr 7 13:35] You have booted with nomodeset. This means your GPU drivers are DISABLED
	[  +0.000001] Any video related functionality will be severely degraded, and you may not even be able to suspend the system properly
	[  +0.000000] Unless you actually understand what nomodeset does, you should reboot without enabling it
	[  +0.054242] Spectre V2 : WARNING: Unprivileged eBPF is enabled with eIBRS on, data leaks possible via Spectre v2 BHB attacks!
	[  +0.042193] acpi PNP0A03:00: fail to add MMCONFIG information, can't access extended PCI configuration space under this bridge.
	[  +5.063295] platform regulatory.0: Direct firmware load for regulatory.db failed with error -2
	[  +2.360663] systemd-fstab-generator[116]: Ignoring "noauto" option for root device
	[  +1.644180] NFSD: Using /var/lib/nfs/v4recovery as the NFSv4 state recovery directory
	[  +0.000009] NFSD: unable to find recovery directory /var/lib/nfs/v4recovery
	[  +0.000001] NFSD: Unable to initialize client recovery tracking! (-2)
	[  +7.768056] systemd-fstab-generator[555]: Ignoring "noauto" option for root device
	[  +0.063980] kauditd_printk_skb: 1 callbacks suppressed
	[  +0.066857] systemd-fstab-generator[567]: Ignoring "noauto" option for root device
	[  +0.213937] systemd-fstab-generator[581]: Ignoring "noauto" option for root device
	[  +0.125997] systemd-fstab-generator[593]: Ignoring "noauto" option for root device
	[  +0.267658] systemd-fstab-generator[618]: Ignoring "noauto" option for root device
	[  +7.911500] systemd-fstab-generator[880]: Ignoring "noauto" option for root device
	[  +0.062381] kauditd_printk_skb: 130 callbacks suppressed
	[  +2.305323] systemd-fstab-generator[1004]: Ignoring "noauto" option for root device
	[ +10.593685] kauditd_printk_skb: 46 callbacks suppressed
	[Apr 7 13:39] systemd-fstab-generator[4897]: Ignoring "noauto" option for root device
	[Apr 7 13:41] systemd-fstab-generator[5176]: Ignoring "noauto" option for root device
	[  +0.065808] kauditd_printk_skb: 12 callbacks suppressed
	
	
	==> kernel <==
	 13:59:04 up 23 min,  0 users,  load average: 0.03, 0.02, 0.04
	Linux old-k8s-version-435730 5.10.207 #1 SMP Tue Jan 14 08:15:54 UTC 2025 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2023.02.9"
	
	
	==> kubelet <==
	Apr 07 13:58:59 old-k8s-version-435730 kubelet[7047]:         /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/client-go/tools/cache/reflector.go:138 +0x185
	Apr 07 13:58:59 old-k8s-version-435730 kubelet[7047]: k8s.io/kubernetes/vendor/k8s.io/client-go/tools/cache.(*Reflector).Run.func1()
	Apr 07 13:58:59 old-k8s-version-435730 kubelet[7047]:         /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/client-go/tools/cache/reflector.go:222 +0x70
	Apr 07 13:58:59 old-k8s-version-435730 kubelet[7047]: k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.BackoffUntil.func1(0xc000a5aef0)
	Apr 07 13:58:59 old-k8s-version-435730 kubelet[7047]:         /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:155 +0x5f
	Apr 07 13:58:59 old-k8s-version-435730 kubelet[7047]: k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.BackoffUntil(0xc000cafef0, 0x4f0ac20, 0xc000115400, 0x1, 0xc0000b6060)
	Apr 07 13:58:59 old-k8s-version-435730 kubelet[7047]:         /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:156 +0xad
	Apr 07 13:58:59 old-k8s-version-435730 kubelet[7047]: k8s.io/kubernetes/vendor/k8s.io/client-go/tools/cache.(*Reflector).Run(0xc000a0a1c0, 0xc0000b6060)
	Apr 07 13:58:59 old-k8s-version-435730 kubelet[7047]:         /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/client-go/tools/cache/reflector.go:220 +0x1c5
	Apr 07 13:58:59 old-k8s-version-435730 kubelet[7047]: k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.(*Group).StartWithChannel.func1()
	Apr 07 13:58:59 old-k8s-version-435730 kubelet[7047]:         /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:56 +0x2e
	Apr 07 13:58:59 old-k8s-version-435730 kubelet[7047]: k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.(*Group).Start.func1(0xc000ad25f0, 0xc0002e9fa0)
	Apr 07 13:58:59 old-k8s-version-435730 kubelet[7047]:         /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:73 +0x51
	Apr 07 13:58:59 old-k8s-version-435730 kubelet[7047]: created by k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.(*Group).Start
	Apr 07 13:58:59 old-k8s-version-435730 kubelet[7047]:         /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:71 +0x65
	Apr 07 13:58:59 old-k8s-version-435730 systemd[1]: kubelet.service: Main process exited, code=exited, status=255/EXCEPTION
	Apr 07 13:58:59 old-k8s-version-435730 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	Apr 07 13:58:59 old-k8s-version-435730 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 180.
	Apr 07 13:58:59 old-k8s-version-435730 systemd[1]: Stopped kubelet: The Kubernetes Node Agent.
	Apr 07 13:58:59 old-k8s-version-435730 systemd[1]: Started kubelet: The Kubernetes Node Agent.
	Apr 07 13:59:00 old-k8s-version-435730 kubelet[7056]: I0407 13:59:00.015290    7056 server.go:416] Version: v1.20.0
	Apr 07 13:59:00 old-k8s-version-435730 kubelet[7056]: I0407 13:59:00.015694    7056 server.go:837] Client rotation is on, will bootstrap in background
	Apr 07 13:59:00 old-k8s-version-435730 kubelet[7056]: I0407 13:59:00.017857    7056 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-client-current.pem".
	Apr 07 13:59:00 old-k8s-version-435730 kubelet[7056]: W0407 13:59:00.018864    7056 manager.go:159] Cannot detect current cgroup on cgroup v2
	Apr 07 13:59:00 old-k8s-version-435730 kubelet[7056]: I0407 13:59:00.019211    7056 dynamic_cafile_content.go:167] Starting client-ca-bundle::/var/lib/minikube/certs/ca.crt
	

                                                
                                                
-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p old-k8s-version-435730 -n old-k8s-version-435730
helpers_test.go:254: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p old-k8s-version-435730 -n old-k8s-version-435730: exit status 2 (258.561217ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:254: status error: exit status 2 (may be ok)
helpers_test.go:256: "old-k8s-version-435730" apiserver is not running, skipping kubectl commands (state="Stopped")
--- FAIL: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop (387.91s)

                                                
                                    

Test pass (275/322)

Order  Passed test  Duration (s)
3 TestDownloadOnly/v1.20.0/json-events 10.19
4 TestDownloadOnly/v1.20.0/preload-exists 0
8 TestDownloadOnly/v1.20.0/LogsDuration 0.09
9 TestDownloadOnly/v1.20.0/DeleteAll 0.17
10 TestDownloadOnly/v1.20.0/DeleteAlwaysSucceeds 0.17
12 TestDownloadOnly/v1.32.2/json-events 5.14
13 TestDownloadOnly/v1.32.2/preload-exists 0
17 TestDownloadOnly/v1.32.2/LogsDuration 0.08
18 TestDownloadOnly/v1.32.2/DeleteAll 0.18
19 TestDownloadOnly/v1.32.2/DeleteAlwaysSucceeds 0.18
21 TestBinaryMirror 0.71
22 TestOffline 88.69
25 TestAddons/PreSetup/EnablingAddonOnNonExistingCluster 0.07
26 TestAddons/PreSetup/DisablingAddonOnNonExistingCluster 0.07
27 TestAddons/Setup 202.55
31 TestAddons/serial/GCPAuth/Namespaces 0.18
32 TestAddons/serial/GCPAuth/FakeCredentials 11.62
35 TestAddons/parallel/Registry 17.92
37 TestAddons/parallel/InspektorGadget 11.94
38 TestAddons/parallel/MetricsServer 6.52
40 TestAddons/parallel/CSI 69.99
41 TestAddons/parallel/Headlamp 20.62
42 TestAddons/parallel/CloudSpanner 6.66
43 TestAddons/parallel/LocalPath 63.74
44 TestAddons/parallel/NvidiaDevicePlugin 5.64
45 TestAddons/parallel/Yakd 11.41
47 TestAddons/StoppedEnableDisable 91.41
48 TestCertOptions 66.97
49 TestCertExpiration 262.49
51 TestForceSystemdFlag 53.24
52 TestForceSystemdEnv 46.76
54 TestKVMDriverInstallOrUpdate 4.11
58 TestErrorSpam/setup 43.18
59 TestErrorSpam/start 0.41
60 TestErrorSpam/status 0.78
61 TestErrorSpam/pause 1.72
62 TestErrorSpam/unpause 1.94
63 TestErrorSpam/stop 4.85
66 TestFunctional/serial/CopySyncFile 0
67 TestFunctional/serial/StartWithProxy 57.02
68 TestFunctional/serial/AuditLog 0
69 TestFunctional/serial/SoftStart 33.69
70 TestFunctional/serial/KubeContext 0.05
71 TestFunctional/serial/KubectlGetPods 0.07
74 TestFunctional/serial/CacheCmd/cache/add_remote 3.68
75 TestFunctional/serial/CacheCmd/cache/add_local 2.17
76 TestFunctional/serial/CacheCmd/cache/CacheDelete 0.06
77 TestFunctional/serial/CacheCmd/cache/list 0.06
78 TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node 0.26
79 TestFunctional/serial/CacheCmd/cache/cache_reload 2.05
80 TestFunctional/serial/CacheCmd/cache/delete 0.12
81 TestFunctional/serial/MinikubeKubectlCmd 0.13
82 TestFunctional/serial/MinikubeKubectlCmdDirectly 0.13
83 TestFunctional/serial/ExtraConfig 389.39
84 TestFunctional/serial/ComponentHealth 0.08
85 TestFunctional/serial/LogsCmd 1.46
86 TestFunctional/serial/LogsFileCmd 1.55
87 TestFunctional/serial/InvalidService 4.73
89 TestFunctional/parallel/ConfigCmd 0.53
90 TestFunctional/parallel/DashboardCmd 140.76
91 TestFunctional/parallel/DryRun 0.37
92 TestFunctional/parallel/InternationalLanguage 0.21
93 TestFunctional/parallel/StatusCmd 1.21
97 TestFunctional/parallel/ServiceCmdConnect 11.78
98 TestFunctional/parallel/AddonsCmd 0.19
101 TestFunctional/parallel/SSHCmd 0.62
102 TestFunctional/parallel/CpCmd 1.72
104 TestFunctional/parallel/FileSync 0.23
105 TestFunctional/parallel/CertSync 1.72
109 TestFunctional/parallel/NodeLabels 0.09
111 TestFunctional/parallel/NonActiveRuntimeDisabled 0.65
113 TestFunctional/parallel/License 0.45
114 TestFunctional/parallel/Version/short 0.06
115 TestFunctional/parallel/Version/components 0.53
116 TestFunctional/parallel/ImageCommands/ImageListShort 0.25
117 TestFunctional/parallel/ImageCommands/ImageListTable 0.23
118 TestFunctional/parallel/ImageCommands/ImageListJson 0.23
119 TestFunctional/parallel/ImageCommands/ImageListYaml 0.24
120 TestFunctional/parallel/ImageCommands/ImageBuild 4.07
121 TestFunctional/parallel/ImageCommands/Setup 1.78
122 TestFunctional/parallel/ServiceCmd/DeployApp 12.3
123 TestFunctional/parallel/ImageCommands/ImageLoadDaemon 4.13
125 TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel 0.54
126 TestFunctional/parallel/TunnelCmd/serial/StartTunnel 0.02
128 TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup 14.32
129 TestFunctional/parallel/ImageCommands/ImageReloadDaemon 1.05
130 TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon 2.24
131 TestFunctional/parallel/ImageCommands/ImageSaveToFile 0.78
132 TestFunctional/parallel/ImageCommands/ImageRemove 0.61
133 TestFunctional/parallel/ImageCommands/ImageLoadFromFile 1.11
134 TestFunctional/parallel/ImageCommands/ImageSaveDaemon 0.89
135 TestFunctional/parallel/ServiceCmd/List 0.62
136 TestFunctional/parallel/ServiceCmd/JSONOutput 0.6
137 TestFunctional/parallel/ServiceCmd/HTTPS 0.48
138 TestFunctional/parallel/ServiceCmd/Format 0.42
139 TestFunctional/parallel/ServiceCmd/URL 0.43
140 TestFunctional/parallel/UpdateContextCmd/no_changes 0.11
141 TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster 0.12
142 TestFunctional/parallel/UpdateContextCmd/no_clusters 0.11
143 TestFunctional/parallel/TunnelCmd/serial/WaitService/IngressIP 0.08
144 TestFunctional/parallel/TunnelCmd/serial/AccessDirect 0.01
148 TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel 0.12
149 TestFunctional/parallel/ProfileCmd/profile_not_create 0.53
150 TestFunctional/parallel/ProfileCmd/profile_list 0.45
151 TestFunctional/parallel/ProfileCmd/profile_json_output 0.4
152 TestFunctional/parallel/MountCmd/any-port 55.7
153 TestFunctional/parallel/MountCmd/specific-port 1.76
154 TestFunctional/parallel/MountCmd/VerifyCleanup 1.37
155 TestFunctional/delete_echo-server_images 0.04
156 TestFunctional/delete_my-image_image 0.02
157 TestFunctional/delete_minikube_cached_images 0.02
162 TestMultiControlPlane/serial/StartCluster 196.37
163 TestMultiControlPlane/serial/DeployApp 7.43
164 TestMultiControlPlane/serial/PingHostFromPods 1.26
165 TestMultiControlPlane/serial/AddWorkerNode 57.13
166 TestMultiControlPlane/serial/NodeLabels 0.07
167 TestMultiControlPlane/serial/HAppyAfterClusterStart 0.91
168 TestMultiControlPlane/serial/CopyFile 13.99
169 TestMultiControlPlane/serial/StopSecondaryNode 91.71
170 TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop 0.69
171 TestMultiControlPlane/serial/RestartSecondaryNode 47.81
172 TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart 0.92
173 TestMultiControlPlane/serial/RestartClusterKeepsNodes 437.12
174 TestMultiControlPlane/serial/DeleteSecondaryNode 18.5
175 TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete 0.67
176 TestMultiControlPlane/serial/StopCluster 273.08
177 TestMultiControlPlane/serial/RestartCluster 126.26
178 TestMultiControlPlane/serial/DegradedAfterClusterRestart 0.67
179 TestMultiControlPlane/serial/AddSecondaryNode 77.92
180 TestMultiControlPlane/serial/HAppyAfterSecondaryNodeAdd 0.91
184 TestJSONOutput/start/Command 55.07
185 TestJSONOutput/start/Audit 0
187 TestJSONOutput/start/parallel/DistinctCurrentSteps 0
188 TestJSONOutput/start/parallel/IncreasingCurrentSteps 0
190 TestJSONOutput/pause/Command 0.7
191 TestJSONOutput/pause/Audit 0
193 TestJSONOutput/pause/parallel/DistinctCurrentSteps 0
194 TestJSONOutput/pause/parallel/IncreasingCurrentSteps 0
196 TestJSONOutput/unpause/Command 0.64
197 TestJSONOutput/unpause/Audit 0
199 TestJSONOutput/unpause/parallel/DistinctCurrentSteps 0
200 TestJSONOutput/unpause/parallel/IncreasingCurrentSteps 0
202 TestJSONOutput/stop/Command 7.36
203 TestJSONOutput/stop/Audit 0
205 TestJSONOutput/stop/parallel/DistinctCurrentSteps 0
206 TestJSONOutput/stop/parallel/IncreasingCurrentSteps 0
207 TestErrorJSONOutput 0.22
212 TestMainNoArgs 0.05
213 TestMinikubeProfile 93.49
216 TestMountStart/serial/StartWithMountFirst 28.16
217 TestMountStart/serial/VerifyMountFirst 0.4
218 TestMountStart/serial/StartWithMountSecond 29.65
219 TestMountStart/serial/VerifyMountSecond 0.4
220 TestMountStart/serial/DeleteFirst 1.21
221 TestMountStart/serial/VerifyMountPostDelete 0.41
222 TestMountStart/serial/Stop 1.29
223 TestMountStart/serial/RestartStopped 22.45
224 TestMountStart/serial/VerifyMountPostStop 0.41
227 TestMultiNode/serial/FreshStart2Nodes 114.08
228 TestMultiNode/serial/DeployApp2Nodes 6.48
229 TestMultiNode/serial/PingHostFrom2Pods 0.84
230 TestMultiNode/serial/AddNode 52.69
231 TestMultiNode/serial/MultiNodeLabels 0.07
232 TestMultiNode/serial/ProfileList 0.66
233 TestMultiNode/serial/CopyFile 7.94
234 TestMultiNode/serial/StopNode 2.44
235 TestMultiNode/serial/StartAfterStop 40.73
236 TestMultiNode/serial/RestartKeepsNodes 381.53
237 TestMultiNode/serial/DeleteNode 2.9
238 TestMultiNode/serial/StopMultiNode 182.13
239 TestMultiNode/serial/RestartMultiNode 115.98
240 TestMultiNode/serial/ValidateNameConflict 48.08
247 TestScheduledStopUnix 117.23
251 TestRunningBinaryUpgrade 219.14
256 TestNoKubernetes/serial/StartNoK8sWithVersion 0.1
266 TestNoKubernetes/serial/StartWithK8s 99.7
274 TestNetworkPlugins/group/false 3.85
278 TestNoKubernetes/serial/StartWithStopK8s 7.55
279 TestNoKubernetes/serial/Start 52.34
280 TestNoKubernetes/serial/VerifyK8sNotRunning 0.31
281 TestNoKubernetes/serial/ProfileList 2.06
282 TestNoKubernetes/serial/Stop 1.33
283 TestNoKubernetes/serial/StartNoArgs 46.75
284 TestNoKubernetes/serial/VerifyK8sNotRunningSecond 0.25
286 TestStartStop/group/no-preload/serial/FirstStart 118.37
290 TestStartStop/group/embed-certs/serial/FirstStart 69.12
291 TestStartStop/group/no-preload/serial/DeployApp 12.3
292 TestStartStop/group/embed-certs/serial/DeployApp 12.34
293 TestStartStop/group/no-preload/serial/EnableAddonWhileActive 1.1
294 TestStartStop/group/no-preload/serial/Stop 91.1
295 TestStartStop/group/old-k8s-version/serial/Stop 6.31
296 TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop 0.23
298 TestStartStop/group/embed-certs/serial/EnableAddonWhileActive 1.24
299 TestStartStop/group/embed-certs/serial/Stop 91.61
300 TestStartStop/group/no-preload/serial/EnableAddonAfterStop 0.24
301 TestStartStop/group/no-preload/serial/SecondStart 396.3
302 TestStartStop/group/embed-certs/serial/EnableAddonAfterStop 0.26
303 TestStartStop/group/embed-certs/serial/SecondStart 352.37
305 TestPause/serial/Start 82.32
307 TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop 11.01
308 TestStartStop/group/embed-certs/serial/AddonExistsAfterStop 5.09
309 TestStartStop/group/embed-certs/serial/VerifyKubernetesImages 0.25
310 TestStartStop/group/embed-certs/serial/Pause 3.03
311 TestStoppedBinaryUpgrade/Setup 0.64
312 TestStoppedBinaryUpgrade/Upgrade 100.61
313 TestStartStop/group/no-preload/serial/UserAppExistsAfterStop 9.01
314 TestStartStop/group/no-preload/serial/AddonExistsAfterStop 5.08
315 TestStartStop/group/no-preload/serial/VerifyKubernetesImages 0.23
316 TestStartStop/group/no-preload/serial/Pause 3.32
318 TestStartStop/group/default-k8s-diff-port/serial/FirstStart 53.87
320 TestStartStop/group/default-k8s-diff-port/serial/DeployApp 10.34
321 TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive 1.2
322 TestStartStop/group/default-k8s-diff-port/serial/Stop 91.22
323 TestStoppedBinaryUpgrade/MinikubeLogs 1.18
325 TestStartStop/group/newest-cni/serial/FirstStart 49.31
326 TestStartStop/group/newest-cni/serial/DeployApp 0
327 TestStartStop/group/newest-cni/serial/EnableAddonWhileActive 1.5
328 TestStartStop/group/newest-cni/serial/Stop 7.36
329 TestNetworkPlugins/group/auto/Start 58.1
330 TestStartStop/group/newest-cni/serial/EnableAddonAfterStop 0.23
331 TestStartStop/group/newest-cni/serial/SecondStart 57.2
332 TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop 0.26
333 TestStartStop/group/default-k8s-diff-port/serial/SecondStart 315.19
334 TestNetworkPlugins/group/auto/KubeletFlags 0.29
335 TestNetworkPlugins/group/auto/NetCatPod 10.33
336 TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop 0
337 TestStartStop/group/newest-cni/serial/AddonExistsAfterStop 0
338 TestStartStop/group/newest-cni/serial/VerifyKubernetesImages 0.23
339 TestStartStop/group/newest-cni/serial/Pause 3.31
340 TestNetworkPlugins/group/auto/DNS 0.2
341 TestNetworkPlugins/group/auto/Localhost 0.2
342 TestNetworkPlugins/group/auto/HairPin 0.25
343 TestNetworkPlugins/group/kindnet/Start 68.04
344 TestNetworkPlugins/group/calico/Start 91.47
345 TestNetworkPlugins/group/kindnet/ControllerPod 6.01
346 TestNetworkPlugins/group/kindnet/KubeletFlags 0.24
347 TestNetworkPlugins/group/kindnet/NetCatPod 10.26
348 TestNetworkPlugins/group/kindnet/DNS 0.18
349 TestNetworkPlugins/group/kindnet/Localhost 0.14
350 TestNetworkPlugins/group/kindnet/HairPin 0.13
351 TestNetworkPlugins/group/custom-flannel/Start 77.3
352 TestNetworkPlugins/group/calico/ControllerPod 6.01
353 TestNetworkPlugins/group/calico/KubeletFlags 0.24
354 TestNetworkPlugins/group/calico/NetCatPod 10.32
355 TestNetworkPlugins/group/calico/DNS 0.15
356 TestNetworkPlugins/group/calico/Localhost 0.15
357 TestNetworkPlugins/group/calico/HairPin 0.13
358 TestNetworkPlugins/group/enable-default-cni/Start 59.75
359 TestNetworkPlugins/group/custom-flannel/KubeletFlags 0.24
360 TestNetworkPlugins/group/custom-flannel/NetCatPod 11.25
361 TestNetworkPlugins/group/custom-flannel/DNS 0.15
362 TestNetworkPlugins/group/custom-flannel/Localhost 0.13
363 TestNetworkPlugins/group/custom-flannel/HairPin 0.14
364 TestNetworkPlugins/group/enable-default-cni/KubeletFlags 0.23
365 TestNetworkPlugins/group/enable-default-cni/NetCatPod 10.26
366 TestNetworkPlugins/group/flannel/Start 72.84
367 TestNetworkPlugins/group/enable-default-cni/DNS 0.18
368 TestNetworkPlugins/group/enable-default-cni/Localhost 0.13
369 TestNetworkPlugins/group/enable-default-cni/HairPin 0.13
370 TestNetworkPlugins/group/bridge/Start 63.61
371 TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop 6.01
372 TestNetworkPlugins/group/flannel/ControllerPod 6.01
373 TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop 5.08
374 TestNetworkPlugins/group/flannel/KubeletFlags 0.24
375 TestNetworkPlugins/group/flannel/NetCatPod 11.28
376 TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages 0.27
377 TestStartStop/group/default-k8s-diff-port/serial/Pause 2.92
378 TestNetworkPlugins/group/bridge/KubeletFlags 0.31
379 TestNetworkPlugins/group/bridge/NetCatPod 12.32
380 TestNetworkPlugins/group/flannel/DNS 0.18
381 TestNetworkPlugins/group/flannel/Localhost 0.15
382 TestNetworkPlugins/group/flannel/HairPin 0.14
383 TestNetworkPlugins/group/bridge/DNS 0.19
384 TestNetworkPlugins/group/bridge/Localhost 0.14
385 TestNetworkPlugins/group/bridge/HairPin 0.14
TestDownloadOnly/v1.20.0/json-events (10.19s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.20.0/json-events
aaa_download_only_test.go:81: (dbg) Run:  out/minikube-linux-amd64 start -o=json --download-only -p download-only-577957 --force --alsologtostderr --kubernetes-version=v1.20.0 --container-runtime=crio --driver=kvm2  --container-runtime=crio
aaa_download_only_test.go:81: (dbg) Done: out/minikube-linux-amd64 start -o=json --download-only -p download-only-577957 --force --alsologtostderr --kubernetes-version=v1.20.0 --container-runtime=crio --driver=kvm2  --container-runtime=crio: (10.19037121s)
--- PASS: TestDownloadOnly/v1.20.0/json-events (10.19s)

                                                
                                    
TestDownloadOnly/v1.20.0/preload-exists (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.20.0/preload-exists
I0407 12:13:34.472670 1169716 preload.go:131] Checking if preload exists for k8s version v1.20.0 and runtime crio
I0407 12:13:34.472776 1169716 preload.go:146] Found local preload: /home/jenkins/minikube-integration/20602-1162386/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4
--- PASS: TestDownloadOnly/v1.20.0/preload-exists (0.00s)

                                                
                                    
TestDownloadOnly/v1.20.0/LogsDuration (0.09s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.20.0/LogsDuration
aaa_download_only_test.go:184: (dbg) Run:  out/minikube-linux-amd64 logs -p download-only-577957
aaa_download_only_test.go:184: (dbg) Non-zero exit: out/minikube-linux-amd64 logs -p download-only-577957: exit status 85 (86.597684ms)

                                                
                                                
-- stdout --
	
	==> Audit <==
	|---------|--------------------------------|----------------------|---------|---------|---------------------|----------|
	| Command |              Args              |       Profile        |  User   | Version |     Start Time      | End Time |
	|---------|--------------------------------|----------------------|---------|---------|---------------------|----------|
	| start   | -o=json --download-only        | download-only-577957 | jenkins | v1.35.0 | 07 Apr 25 12:13 UTC |          |
	|         | -p download-only-577957        |                      |         |         |                     |          |
	|         | --force --alsologtostderr      |                      |         |         |                     |          |
	|         | --kubernetes-version=v1.20.0   |                      |         |         |                     |          |
	|         | --container-runtime=crio       |                      |         |         |                     |          |
	|         | --driver=kvm2                  |                      |         |         |                     |          |
	|         | --container-runtime=crio       |                      |         |         |                     |          |
	|---------|--------------------------------|----------------------|---------|---------|---------------------|----------|
	
	
	==> Last Start <==
	Log file created at: 2025/04/07 12:13:24
	Running on machine: ubuntu-20-agent-2
	Binary: Built with gc go1.24.0 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0407 12:13:24.337766 1169730 out.go:345] Setting OutFile to fd 1 ...
	I0407 12:13:24.338223 1169730 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0407 12:13:24.338240 1169730 out.go:358] Setting ErrFile to fd 2...
	I0407 12:13:24.338245 1169730 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0407 12:13:24.338467 1169730 root.go:338] Updating PATH: /home/jenkins/minikube-integration/20602-1162386/.minikube/bin
	W0407 12:13:24.338633 1169730 root.go:314] Error reading config file at /home/jenkins/minikube-integration/20602-1162386/.minikube/config/config.json: open /home/jenkins/minikube-integration/20602-1162386/.minikube/config/config.json: no such file or directory
	I0407 12:13:24.339305 1169730 out.go:352] Setting JSON to true
	I0407 12:13:24.340587 1169730 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-2","uptime":14148,"bootTime":1744013856,"procs":206,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1078-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0407 12:13:24.340729 1169730 start.go:139] virtualization: kvm guest
	I0407 12:13:24.344242 1169730 out.go:97] [download-only-577957] minikube v1.35.0 on Ubuntu 20.04 (kvm/amd64)
	W0407 12:13:24.344492 1169730 preload.go:293] Failed to list preload files: open /home/jenkins/minikube-integration/20602-1162386/.minikube/cache/preloaded-tarball: no such file or directory
	I0407 12:13:24.344599 1169730 notify.go:220] Checking for updates...
	I0407 12:13:24.346877 1169730 out.go:169] MINIKUBE_LOCATION=20602
	I0407 12:13:24.349193 1169730 out.go:169] MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0407 12:13:24.351716 1169730 out.go:169] KUBECONFIG=/home/jenkins/minikube-integration/20602-1162386/kubeconfig
	I0407 12:13:24.354244 1169730 out.go:169] MINIKUBE_HOME=/home/jenkins/minikube-integration/20602-1162386/.minikube
	I0407 12:13:24.356398 1169730 out.go:169] MINIKUBE_BIN=out/minikube-linux-amd64
	W0407 12:13:24.359805 1169730 out.go:321] minikube skips various validations when --force is supplied; this may lead to unexpected behavior
	I0407 12:13:24.360118 1169730 driver.go:394] Setting default libvirt URI to qemu:///system
	I0407 12:13:24.404222 1169730 out.go:97] Using the kvm2 driver based on user configuration
	I0407 12:13:24.404284 1169730 start.go:297] selected driver: kvm2
	I0407 12:13:24.404292 1169730 start.go:901] validating driver "kvm2" against <nil>
	I0407 12:13:24.404712 1169730 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0407 12:13:24.404815 1169730 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/20602-1162386/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0407 12:13:24.423600 1169730 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.35.0
	I0407 12:13:24.423677 1169730 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0407 12:13:24.424425 1169730 start_flags.go:393] Using suggested 6000MB memory alloc based on sys=32089MB, container=0MB
	I0407 12:13:24.424615 1169730 start_flags.go:929] Wait components to verify : map[apiserver:true system_pods:true]
	I0407 12:13:24.424657 1169730 cni.go:84] Creating CNI manager for ""
	I0407 12:13:24.424714 1169730 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0407 12:13:24.424724 1169730 start_flags.go:319] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I0407 12:13:24.424788 1169730 start.go:340] cluster config:
	{Name:download-only-577957 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.46-1743675393-20591@sha256:8de8167e280414c9d16e4c3da59bc85bc7c9cc24228af995863fc7bcabfcf727 Memory:6000 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:download-only-577957 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local Cont
ainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0407 12:13:24.425002 1169730 iso.go:125] acquiring lock: {Name:mk51e1827709f7a3810dbd898083f8185ece65eb Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0407 12:13:24.427823 1169730 out.go:97] Downloading VM boot image ...
	I0407 12:13:24.427895 1169730 download.go:108] Downloading: https://storage.googleapis.com/minikube/iso/minikube-v1.35.0-amd64.iso?checksum=file:https://storage.googleapis.com/minikube/iso/minikube-v1.35.0-amd64.iso.sha256 -> /home/jenkins/minikube-integration/20602-1162386/.minikube/cache/iso/amd64/minikube-v1.35.0-amd64.iso
	I0407 12:13:28.465569 1169730 out.go:97] Starting "download-only-577957" primary control-plane node in "download-only-577957" cluster
	I0407 12:13:28.465628 1169730 preload.go:131] Checking if preload exists for k8s version v1.20.0 and runtime crio
	I0407 12:13:28.495778 1169730 preload.go:118] Found remote preload: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.20.0/preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4
	I0407 12:13:28.495841 1169730 cache.go:56] Caching tarball of preloaded images
	I0407 12:13:28.496050 1169730 preload.go:131] Checking if preload exists for k8s version v1.20.0 and runtime crio
	I0407 12:13:28.498524 1169730 out.go:97] Downloading Kubernetes v1.20.0 preload ...
	I0407 12:13:28.498576 1169730 preload.go:236] getting checksum for preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4 ...
	I0407 12:13:28.523506 1169730 download.go:108] Downloading: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.20.0/preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4?checksum=md5:f93b07cde9c3289306cbaeb7a1803c19 -> /home/jenkins/minikube-integration/20602-1162386/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4
	I0407 12:13:32.800289 1169730 preload.go:247] saving checksum for preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4 ...
	I0407 12:13:32.800435 1169730 preload.go:254] verifying checksum of /home/jenkins/minikube-integration/20602-1162386/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4 ...
	I0407 12:13:33.798544 1169730 cache.go:59] Finished verifying existence of preloaded tar for v1.20.0 on crio
	I0407 12:13:33.798986 1169730 profile.go:143] Saving config to /home/jenkins/minikube-integration/20602-1162386/.minikube/profiles/download-only-577957/config.json ...
	I0407 12:13:33.799035 1169730 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20602-1162386/.minikube/profiles/download-only-577957/config.json: {Name:mka813eac32e3bf32e3ec5bd9ea6fbc23be0c119 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0407 12:13:33.799258 1169730 preload.go:131] Checking if preload exists for k8s version v1.20.0 and runtime crio
	I0407 12:13:33.799487 1169730 download.go:108] Downloading: https://dl.k8s.io/release/v1.20.0/bin/linux/amd64/kubectl?checksum=file:https://dl.k8s.io/release/v1.20.0/bin/linux/amd64/kubectl.sha256 -> /home/jenkins/minikube-integration/20602-1162386/.minikube/cache/linux/amd64/v1.20.0/kubectl
	
	
	* The control-plane node download-only-577957 host does not exist
	  To start a cluster, run: "minikube start -p download-only-577957"

                                                
                                                
-- /stdout --
aaa_download_only_test.go:185: minikube logs failed with error: exit status 85
--- PASS: TestDownloadOnly/v1.20.0/LogsDuration (0.09s)
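The exit status 85 from "minikube logs" is expected here: a download-only profile never creates a host, as the message above says. The download step itself can be reproduced outside the test harness; a minimal sketch, assuming the built binary at out/minikube-linux-amd64 and a throwaway profile name:

	# cache the ISO, preload tarball and kubectl without creating a VM
	out/minikube-linux-amd64 start -p download-demo --download-only --force \
	  --kubernetes-version=v1.20.0 --container-runtime=crio --driver=kvm2
	# cached artifacts land under ~/.minikube (or $MINIKUBE_HOME if set)
	ls ~/.minikube/cache/preloaded-tarball/ ~/.minikube/cache/linux/amd64/v1.20.0/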

                                                
                                    
TestDownloadOnly/v1.20.0/DeleteAll (0.17s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.20.0/DeleteAll
aaa_download_only_test.go:197: (dbg) Run:  out/minikube-linux-amd64 delete --all
--- PASS: TestDownloadOnly/v1.20.0/DeleteAll (0.17s)

                                                
                                    
TestDownloadOnly/v1.20.0/DeleteAlwaysSucceeds (0.17s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.20.0/DeleteAlwaysSucceeds
aaa_download_only_test.go:208: (dbg) Run:  out/minikube-linux-amd64 delete -p download-only-577957
--- PASS: TestDownloadOnly/v1.20.0/DeleteAlwaysSucceeds (0.17s)

                                                
                                    
TestDownloadOnly/v1.32.2/json-events (5.14s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.32.2/json-events
aaa_download_only_test.go:81: (dbg) Run:  out/minikube-linux-amd64 start -o=json --download-only -p download-only-170875 --force --alsologtostderr --kubernetes-version=v1.32.2 --container-runtime=crio --driver=kvm2  --container-runtime=crio
aaa_download_only_test.go:81: (dbg) Done: out/minikube-linux-amd64 start -o=json --download-only -p download-only-170875 --force --alsologtostderr --kubernetes-version=v1.32.2 --container-runtime=crio --driver=kvm2  --container-runtime=crio: (5.142929076s)
--- PASS: TestDownloadOnly/v1.32.2/json-events (5.14s)

                                                
                                    
TestDownloadOnly/v1.32.2/preload-exists (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.32.2/preload-exists
I0407 12:13:40.044528 1169716 preload.go:131] Checking if preload exists for k8s version v1.32.2 and runtime crio
I0407 12:13:40.044577 1169716 preload.go:146] Found local preload: /home/jenkins/minikube-integration/20602-1162386/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.32.2-cri-o-overlay-amd64.tar.lz4
--- PASS: TestDownloadOnly/v1.32.2/preload-exists (0.00s)

                                                
                                    
TestDownloadOnly/v1.32.2/LogsDuration (0.08s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.32.2/LogsDuration
aaa_download_only_test.go:184: (dbg) Run:  out/minikube-linux-amd64 logs -p download-only-170875
aaa_download_only_test.go:184: (dbg) Non-zero exit: out/minikube-linux-amd64 logs -p download-only-170875: exit status 85 (83.915218ms)

                                                
                                                
-- stdout --
	
	==> Audit <==
	|---------|--------------------------------|----------------------|---------|---------|---------------------|---------------------|
	| Command |              Args              |       Profile        |  User   | Version |     Start Time      |      End Time       |
	|---------|--------------------------------|----------------------|---------|---------|---------------------|---------------------|
	| start   | -o=json --download-only        | download-only-577957 | jenkins | v1.35.0 | 07 Apr 25 12:13 UTC |                     |
	|         | -p download-only-577957        |                      |         |         |                     |                     |
	|         | --force --alsologtostderr      |                      |         |         |                     |                     |
	|         | --kubernetes-version=v1.20.0   |                      |         |         |                     |                     |
	|         | --container-runtime=crio       |                      |         |         |                     |                     |
	|         | --driver=kvm2                  |                      |         |         |                     |                     |
	|         | --container-runtime=crio       |                      |         |         |                     |                     |
	| delete  | --all                          | minikube             | jenkins | v1.35.0 | 07 Apr 25 12:13 UTC | 07 Apr 25 12:13 UTC |
	| delete  | -p download-only-577957        | download-only-577957 | jenkins | v1.35.0 | 07 Apr 25 12:13 UTC | 07 Apr 25 12:13 UTC |
	| start   | -o=json --download-only        | download-only-170875 | jenkins | v1.35.0 | 07 Apr 25 12:13 UTC |                     |
	|         | -p download-only-170875        |                      |         |         |                     |                     |
	|         | --force --alsologtostderr      |                      |         |         |                     |                     |
	|         | --kubernetes-version=v1.32.2   |                      |         |         |                     |                     |
	|         | --container-runtime=crio       |                      |         |         |                     |                     |
	|         | --driver=kvm2                  |                      |         |         |                     |                     |
	|         | --container-runtime=crio       |                      |         |         |                     |                     |
	|---------|--------------------------------|----------------------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2025/04/07 12:13:34
	Running on machine: ubuntu-20-agent-2
	Binary: Built with gc go1.24.0 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0407 12:13:34.952346 1169926 out.go:345] Setting OutFile to fd 1 ...
	I0407 12:13:34.952484 1169926 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0407 12:13:34.952492 1169926 out.go:358] Setting ErrFile to fd 2...
	I0407 12:13:34.952498 1169926 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0407 12:13:34.952851 1169926 root.go:338] Updating PATH: /home/jenkins/minikube-integration/20602-1162386/.minikube/bin
	I0407 12:13:34.953645 1169926 out.go:352] Setting JSON to true
	I0407 12:13:34.954878 1169926 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-2","uptime":14159,"bootTime":1744013856,"procs":204,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1078-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0407 12:13:34.954971 1169926 start.go:139] virtualization: kvm guest
	I0407 12:13:34.957836 1169926 out.go:97] [download-only-170875] minikube v1.35.0 on Ubuntu 20.04 (kvm/amd64)
	I0407 12:13:34.958130 1169926 notify.go:220] Checking for updates...
	I0407 12:13:34.960351 1169926 out.go:169] MINIKUBE_LOCATION=20602
	I0407 12:13:34.963073 1169926 out.go:169] MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0407 12:13:34.966063 1169926 out.go:169] KUBECONFIG=/home/jenkins/minikube-integration/20602-1162386/kubeconfig
	I0407 12:13:34.968210 1169926 out.go:169] MINIKUBE_HOME=/home/jenkins/minikube-integration/20602-1162386/.minikube
	I0407 12:13:34.970569 1169926 out.go:169] MINIKUBE_BIN=out/minikube-linux-amd64
	W0407 12:13:34.974611 1169926 out.go:321] minikube skips various validations when --force is supplied; this may lead to unexpected behavior
	I0407 12:13:34.974933 1169926 driver.go:394] Setting default libvirt URI to qemu:///system
	I0407 12:13:35.017817 1169926 out.go:97] Using the kvm2 driver based on user configuration
	I0407 12:13:35.017883 1169926 start.go:297] selected driver: kvm2
	I0407 12:13:35.017912 1169926 start.go:901] validating driver "kvm2" against <nil>
	I0407 12:13:35.018505 1169926 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0407 12:13:35.018659 1169926 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/20602-1162386/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0407 12:13:35.040161 1169926 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.35.0
	I0407 12:13:35.040251 1169926 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0407 12:13:35.040802 1169926 start_flags.go:393] Using suggested 6000MB memory alloc based on sys=32089MB, container=0MB
	I0407 12:13:35.040981 1169926 start_flags.go:929] Wait components to verify : map[apiserver:true system_pods:true]
	I0407 12:13:35.041024 1169926 cni.go:84] Creating CNI manager for ""
	I0407 12:13:35.041056 1169926 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0407 12:13:35.041063 1169926 start_flags.go:319] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I0407 12:13:35.041118 1169926 start.go:340] cluster config:
	{Name:download-only-170875 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.46-1743675393-20591@sha256:8de8167e280414c9d16e4c3da59bc85bc7c9cc24228af995863fc7bcabfcf727 Memory:6000 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.32.2 ClusterName:download-only-170875 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local Cont
ainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.32.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0407 12:13:35.041243 1169926 iso.go:125] acquiring lock: {Name:mk51e1827709f7a3810dbd898083f8185ece65eb Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0407 12:13:35.043847 1169926 out.go:97] Starting "download-only-170875" primary control-plane node in "download-only-170875" cluster
	I0407 12:13:35.043896 1169926 preload.go:131] Checking if preload exists for k8s version v1.32.2 and runtime crio
	I0407 12:13:35.122853 1169926 preload.go:118] Found remote preload: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.32.2/preloaded-images-k8s-v18-v1.32.2-cri-o-overlay-amd64.tar.lz4
	I0407 12:13:35.122990 1169926 cache.go:56] Caching tarball of preloaded images
	I0407 12:13:35.123196 1169926 preload.go:131] Checking if preload exists for k8s version v1.32.2 and runtime crio
	I0407 12:13:35.125881 1169926 out.go:97] Downloading Kubernetes v1.32.2 preload ...
	I0407 12:13:35.125931 1169926 preload.go:236] getting checksum for preloaded-images-k8s-v18-v1.32.2-cri-o-overlay-amd64.tar.lz4 ...
	I0407 12:13:35.161339 1169926 download.go:108] Downloading: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.32.2/preloaded-images-k8s-v18-v1.32.2-cri-o-overlay-amd64.tar.lz4?checksum=md5:a1ce605168a895ad5f3b3c8db1fe4d66 -> /home/jenkins/minikube-integration/20602-1162386/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.32.2-cri-o-overlay-amd64.tar.lz4
	I0407 12:13:38.339060 1169926 preload.go:247] saving checksum for preloaded-images-k8s-v18-v1.32.2-cri-o-overlay-amd64.tar.lz4 ...
	I0407 12:13:38.339188 1169926 preload.go:254] verifying checksum of /home/jenkins/minikube-integration/20602-1162386/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.32.2-cri-o-overlay-amd64.tar.lz4 ...
	I0407 12:13:39.277470 1169926 cache.go:59] Finished verifying existence of preloaded tar for v1.32.2 on crio
	I0407 12:13:39.277862 1169926 profile.go:143] Saving config to /home/jenkins/minikube-integration/20602-1162386/.minikube/profiles/download-only-170875/config.json ...
	I0407 12:13:39.277916 1169926 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20602-1162386/.minikube/profiles/download-only-170875/config.json: {Name:mkc66735fd227f1244342f60319b683d2f7305fe Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0407 12:13:39.278118 1169926 preload.go:131] Checking if preload exists for k8s version v1.32.2 and runtime crio
	I0407 12:13:39.278274 1169926 download.go:108] Downloading: https://dl.k8s.io/release/v1.32.2/bin/linux/amd64/kubectl?checksum=file:https://dl.k8s.io/release/v1.32.2/bin/linux/amd64/kubectl.sha256 -> /home/jenkins/minikube-integration/20602-1162386/.minikube/cache/linux/amd64/v1.32.2/kubectl
	
	
	* The control-plane node download-only-170875 host does not exist
	  To start a cluster, run: "minikube start -p download-only-170875"

                                                
                                                
-- /stdout --
aaa_download_only_test.go:185: minikube logs failed with error: exit status 85
--- PASS: TestDownloadOnly/v1.32.2/LogsDuration (0.08s)

                                                
                                    
TestDownloadOnly/v1.32.2/DeleteAll (0.18s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.32.2/DeleteAll
aaa_download_only_test.go:197: (dbg) Run:  out/minikube-linux-amd64 delete --all
--- PASS: TestDownloadOnly/v1.32.2/DeleteAll (0.18s)

                                                
                                    
TestDownloadOnly/v1.32.2/DeleteAlwaysSucceeds (0.18s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.32.2/DeleteAlwaysSucceeds
aaa_download_only_test.go:208: (dbg) Run:  out/minikube-linux-amd64 delete -p download-only-170875
--- PASS: TestDownloadOnly/v1.32.2/DeleteAlwaysSucceeds (0.18s)

                                                
                                    
TestBinaryMirror (0.71s)

                                                
                                                
=== RUN   TestBinaryMirror
I0407 12:13:40.831163 1169716 binary.go:74] Not caching binary, using https://dl.k8s.io/release/v1.32.2/bin/linux/amd64/kubectl?checksum=file:https://dl.k8s.io/release/v1.32.2/bin/linux/amd64/kubectl.sha256
aaa_download_only_test.go:314: (dbg) Run:  out/minikube-linux-amd64 start --download-only -p binary-mirror-592205 --alsologtostderr --binary-mirror http://127.0.0.1:33293 --driver=kvm2  --container-runtime=crio
helpers_test.go:175: Cleaning up "binary-mirror-592205" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p binary-mirror-592205
--- PASS: TestBinaryMirror (0.71s)
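TestBinaryMirror only verifies that the kubectl download is redirected to the --binary-mirror URL instead of dl.k8s.io (see the "Not caching binary" line above). A rough reproduction sketch, assuming a local HTTP server exposing the same path layout as dl.k8s.io (release/<version>/bin/linux/amd64/...); the port and directory are illustrative:

	python3 -m http.server 33293 --directory ./k8s-mirror &
	out/minikube-linux-amd64 start --download-only -p mirror-demo \
	  --binary-mirror http://127.0.0.1:33293 --driver=kvm2 --container-runtime=crio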

                                                
                                    
TestOffline (88.69s)

                                                
                                                
=== RUN   TestOffline
=== PAUSE TestOffline

                                                
                                                

                                                
                                                
=== CONT  TestOffline
aab_offline_test.go:55: (dbg) Run:  out/minikube-linux-amd64 start -p offline-crio-011278 --alsologtostderr -v=1 --memory=2048 --wait=true --driver=kvm2  --container-runtime=crio
aab_offline_test.go:55: (dbg) Done: out/minikube-linux-amd64 start -p offline-crio-011278 --alsologtostderr -v=1 --memory=2048 --wait=true --driver=kvm2  --container-runtime=crio: (1m27.560478137s)
helpers_test.go:175: Cleaning up "offline-crio-011278" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p offline-crio-011278
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p offline-crio-011278: (1.129595968s)
--- PASS: TestOffline (88.69s)

                                                
                                    
TestAddons/PreSetup/EnablingAddonOnNonExistingCluster (0.07s)

                                                
                                                
=== RUN   TestAddons/PreSetup/EnablingAddonOnNonExistingCluster
=== PAUSE TestAddons/PreSetup/EnablingAddonOnNonExistingCluster

                                                
                                                

                                                
                                                
=== CONT  TestAddons/PreSetup/EnablingAddonOnNonExistingCluster
addons_test.go:939: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p addons-660533
addons_test.go:939: (dbg) Non-zero exit: out/minikube-linux-amd64 addons enable dashboard -p addons-660533: exit status 85 (70.468057ms)

                                                
                                                
-- stdout --
	* Profile "addons-660533" not found. Run "minikube profile list" to view all profiles.
	  To start a cluster, run: "minikube start -p addons-660533"

                                                
                                                
-- /stdout --
--- PASS: TestAddons/PreSetup/EnablingAddonOnNonExistingCluster (0.07s)

                                                
                                    
TestAddons/PreSetup/DisablingAddonOnNonExistingCluster (0.07s)

                                                
                                                
=== RUN   TestAddons/PreSetup/DisablingAddonOnNonExistingCluster
=== PAUSE TestAddons/PreSetup/DisablingAddonOnNonExistingCluster

                                                
                                                

                                                
                                                
=== CONT  TestAddons/PreSetup/DisablingAddonOnNonExistingCluster
addons_test.go:950: (dbg) Run:  out/minikube-linux-amd64 addons disable dashboard -p addons-660533
addons_test.go:950: (dbg) Non-zero exit: out/minikube-linux-amd64 addons disable dashboard -p addons-660533: exit status 85 (71.418343ms)

                                                
                                                
-- stdout --
	* Profile "addons-660533" not found. Run "minikube profile list" to view all profiles.
	  To start a cluster, run: "minikube start -p addons-660533"

                                                
                                                
-- /stdout --
--- PASS: TestAddons/PreSetup/DisablingAddonOnNonExistingCluster (0.07s)

                                                
                                    
TestAddons/Setup (202.55s)

                                                
                                                
=== RUN   TestAddons/Setup
addons_test.go:107: (dbg) Run:  out/minikube-linux-amd64 start -p addons-660533 --wait=true --memory=4000 --alsologtostderr --addons=registry --addons=metrics-server --addons=volumesnapshots --addons=csi-hostpath-driver --addons=gcp-auth --addons=cloud-spanner --addons=inspektor-gadget --addons=nvidia-device-plugin --addons=yakd --addons=volcano --addons=amd-gpu-device-plugin --driver=kvm2  --container-runtime=crio --addons=ingress --addons=ingress-dns --addons=storage-provisioner-rancher
addons_test.go:107: (dbg) Done: out/minikube-linux-amd64 start -p addons-660533 --wait=true --memory=4000 --alsologtostderr --addons=registry --addons=metrics-server --addons=volumesnapshots --addons=csi-hostpath-driver --addons=gcp-auth --addons=cloud-spanner --addons=inspektor-gadget --addons=nvidia-device-plugin --addons=yakd --addons=volcano --addons=amd-gpu-device-plugin --driver=kvm2  --container-runtime=crio --addons=ingress --addons=ingress-dns --addons=storage-provisioner-rancher: (3m22.549751517s)
--- PASS: TestAddons/Setup (202.55s)
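The single start above enables every addon under test through repeated --addons flags. Against an already running profile the same addons can be toggled one at a time; a minimal sketch with an illustrative profile name and a couple of the addon names used above:

	out/minikube-linux-amd64 start -p addons-demo --memory=4000 --driver=kvm2 --container-runtime=crio
	out/minikube-linux-amd64 -p addons-demo addons enable ingress
	out/minikube-linux-amd64 -p addons-demo addons enable metrics-server
	out/minikube-linux-amd64 -p addons-demo addons list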

                                                
                                    
TestAddons/serial/GCPAuth/Namespaces (0.18s)

                                                
                                                
=== RUN   TestAddons/serial/GCPAuth/Namespaces
addons_test.go:569: (dbg) Run:  kubectl --context addons-660533 create ns new-namespace
addons_test.go:583: (dbg) Run:  kubectl --context addons-660533 get secret gcp-auth -n new-namespace
--- PASS: TestAddons/serial/GCPAuth/Namespaces (0.18s)

                                                
                                    
TestAddons/serial/GCPAuth/FakeCredentials (11.62s)

                                                
                                                
=== RUN   TestAddons/serial/GCPAuth/FakeCredentials
addons_test.go:614: (dbg) Run:  kubectl --context addons-660533 create -f testdata/busybox.yaml
addons_test.go:621: (dbg) Run:  kubectl --context addons-660533 create sa gcp-auth-test
addons_test.go:627: (dbg) TestAddons/serial/GCPAuth/FakeCredentials: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:344: "busybox" [e3da275a-a9b2-4918-9b3b-461b094d63cd] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:344: "busybox" [e3da275a-a9b2-4918-9b3b-461b094d63cd] Running
addons_test.go:627: (dbg) TestAddons/serial/GCPAuth/FakeCredentials: integration-test=busybox healthy within 11.004649478s
addons_test.go:633: (dbg) Run:  kubectl --context addons-660533 exec busybox -- /bin/sh -c "printenv GOOGLE_APPLICATION_CREDENTIALS"
addons_test.go:645: (dbg) Run:  kubectl --context addons-660533 describe sa gcp-auth-test
addons_test.go:683: (dbg) Run:  kubectl --context addons-660533 exec busybox -- /bin/sh -c "printenv GOOGLE_CLOUD_PROJECT"
--- PASS: TestAddons/serial/GCPAuth/FakeCredentials (11.62s)
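The gcp-auth addon injects credential environment variables into pods created after it is enabled, which is what the two printenv steps above assert. The same check can be run by hand against the busybox pod while it exists:

	kubectl --context addons-660533 exec busybox -- printenv GOOGLE_APPLICATION_CREDENTIALS
	kubectl --context addons-660533 exec busybox -- printenv GOOGLE_CLOUD_PROJECT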

                                                
                                    
TestAddons/parallel/Registry (17.92s)

                                                
                                                
=== RUN   TestAddons/parallel/Registry
=== PAUSE TestAddons/parallel/Registry

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/Registry
addons_test.go:321: registry stabilized in 5.611269ms
addons_test.go:323: (dbg) TestAddons/parallel/Registry: waiting 6m0s for pods matching "actual-registry=true" in namespace "kube-system" ...
helpers_test.go:344: "registry-6c88467877-cbcx8" [e8f82417-0cbb-4261-b17c-98dd81f33a21] Running
addons_test.go:323: (dbg) TestAddons/parallel/Registry: actual-registry=true healthy within 5.005483418s
addons_test.go:326: (dbg) TestAddons/parallel/Registry: waiting 10m0s for pods matching "registry-proxy=true" in namespace "kube-system" ...
helpers_test.go:344: "registry-proxy-jjr6j" [c51e7db8-05e9-4bf1-8b27-d01380c2388b] Running
addons_test.go:326: (dbg) TestAddons/parallel/Registry: registry-proxy=true healthy within 5.004381097s
addons_test.go:331: (dbg) Run:  kubectl --context addons-660533 delete po -l run=registry-test --now
addons_test.go:336: (dbg) Run:  kubectl --context addons-660533 run --rm registry-test --restart=Never --image=gcr.io/k8s-minikube/busybox -it -- sh -c "wget --spider -S http://registry.kube-system.svc.cluster.local"
addons_test.go:336: (dbg) Done: kubectl --context addons-660533 run --rm registry-test --restart=Never --image=gcr.io/k8s-minikube/busybox -it -- sh -c "wget --spider -S http://registry.kube-system.svc.cluster.local": (6.459182401s)
addons_test.go:350: (dbg) Run:  out/minikube-linux-amd64 -p addons-660533 ip
2025/04/07 12:17:41 [DEBUG] GET http://192.168.39.112:5000
addons_test.go:992: (dbg) Run:  out/minikube-linux-amd64 -p addons-660533 addons disable registry --alsologtostderr -v=1
addons_test.go:992: (dbg) Done: out/minikube-linux-amd64 -p addons-660533 addons disable registry --alsologtostderr -v=1: (1.260806433s)
--- PASS: TestAddons/parallel/Registry (17.92s)
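Inside the cluster the registry addon answers at the registry.kube-system service, and the test probes it from a throwaway busybox pod. The probe, lifted from the command above, can be rerun as-is while the addon is enabled:

	kubectl --context addons-660533 run registry-test --rm --restart=Never \
	  --image=gcr.io/k8s-minikube/busybox -it -- \
	  sh -c "wget --spider -S http://registry.kube-system.svc.cluster.local"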

                                                
                                    
TestAddons/parallel/InspektorGadget (11.94s)

                                                
                                                
=== RUN   TestAddons/parallel/InspektorGadget
=== PAUSE TestAddons/parallel/InspektorGadget

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/InspektorGadget
addons_test.go:762: (dbg) TestAddons/parallel/InspektorGadget: waiting 8m0s for pods matching "k8s-app=gadget" in namespace "gadget" ...
helpers_test.go:344: "gadget-hkv5c" [f52be647-1827-453b-a355-bd29a35a6564] Running
addons_test.go:762: (dbg) TestAddons/parallel/InspektorGadget: k8s-app=gadget healthy within 6.004767134s
addons_test.go:992: (dbg) Run:  out/minikube-linux-amd64 -p addons-660533 addons disable inspektor-gadget --alsologtostderr -v=1
addons_test.go:992: (dbg) Done: out/minikube-linux-amd64 -p addons-660533 addons disable inspektor-gadget --alsologtostderr -v=1: (5.932095981s)
--- PASS: TestAddons/parallel/InspektorGadget (11.94s)

                                                
                                    
TestAddons/parallel/MetricsServer (6.52s)

                                                
                                                
=== RUN   TestAddons/parallel/MetricsServer
=== PAUSE TestAddons/parallel/MetricsServer

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/MetricsServer
addons_test.go:394: metrics-server stabilized in 5.194726ms
addons_test.go:396: (dbg) TestAddons/parallel/MetricsServer: waiting 6m0s for pods matching "k8s-app=metrics-server" in namespace "kube-system" ...
I0407 12:17:24.981293 1169716 kapi.go:86] Found 3 Pods for label selector kubernetes.io/minikube-addons=csi-hostpath-driver
I0407 12:17:24.981327 1169716 kapi.go:107] duration metric: took 6.700126ms to wait for kubernetes.io/minikube-addons=csi-hostpath-driver ...
helpers_test.go:344: "metrics-server-7fbb699795-hrk7g" [eee024d4-c7d8-46c7-82e3-d5ad8e36eccb] Running
addons_test.go:396: (dbg) TestAddons/parallel/MetricsServer: k8s-app=metrics-server healthy within 5.005458861s
addons_test.go:402: (dbg) Run:  kubectl --context addons-660533 top pods -n kube-system
addons_test.go:992: (dbg) Run:  out/minikube-linux-amd64 -p addons-660533 addons disable metrics-server --alsologtostderr -v=1
addons_test.go:992: (dbg) Done: out/minikube-linux-amd64 -p addons-660533 addons disable metrics-server --alsologtostderr -v=1: (1.421402643s)
--- PASS: TestAddons/parallel/MetricsServer (6.52s)
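Once metrics-server is healthy the metrics API serves resource usage, and the test only asserts that a top query succeeds. The same queries by hand:

	kubectl --context addons-660533 top pods -n kube-system
	kubectl --context addons-660533 top nodes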

                                                
                                    
TestAddons/parallel/CSI (69.99s)

                                                
                                                
=== RUN   TestAddons/parallel/CSI
=== PAUSE TestAddons/parallel/CSI

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/CSI
addons_test.go:488: csi-hostpath-driver pods stabilized in 6.71177ms
addons_test.go:491: (dbg) Run:  kubectl --context addons-660533 create -f testdata/csi-hostpath-driver/pvc.yaml
addons_test.go:496: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pvc "hpvc" in namespace "default" ...
helpers_test.go:394: (dbg) Run:  kubectl --context addons-660533 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-660533 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-660533 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-660533 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-660533 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-660533 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-660533 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-660533 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-660533 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-660533 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-660533 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-660533 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-660533 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-660533 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-660533 get pvc hpvc -o jsonpath={.status.phase} -n default
addons_test.go:501: (dbg) Run:  kubectl --context addons-660533 create -f testdata/csi-hostpath-driver/pv-pod.yaml
addons_test.go:506: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pods matching "app=task-pv-pod" in namespace "default" ...
helpers_test.go:344: "task-pv-pod" [73bee7f1-52a5-47a1-a12f-9eadf165b611] Pending
helpers_test.go:344: "task-pv-pod" [73bee7f1-52a5-47a1-a12f-9eadf165b611] Pending / Ready:ContainersNotReady (containers with unready status: [task-pv-container]) / ContainersReady:ContainersNotReady (containers with unready status: [task-pv-container])
helpers_test.go:344: "task-pv-pod" [73bee7f1-52a5-47a1-a12f-9eadf165b611] Running
addons_test.go:506: (dbg) TestAddons/parallel/CSI: app=task-pv-pod healthy within 16.003319028s
addons_test.go:511: (dbg) Run:  kubectl --context addons-660533 create -f testdata/csi-hostpath-driver/snapshot.yaml
addons_test.go:516: (dbg) TestAddons/parallel/CSI: waiting 6m0s for volume snapshot "new-snapshot-demo" in namespace "default" ...
helpers_test.go:419: (dbg) Run:  kubectl --context addons-660533 get volumesnapshot new-snapshot-demo -o jsonpath={.status.readyToUse} -n default
helpers_test.go:427: TestAddons/parallel/CSI: WARNING: volume snapshot get for "default" "new-snapshot-demo" returned: 
helpers_test.go:419: (dbg) Run:  kubectl --context addons-660533 get volumesnapshot new-snapshot-demo -o jsonpath={.status.readyToUse} -n default
addons_test.go:521: (dbg) Run:  kubectl --context addons-660533 delete pod task-pv-pod
addons_test.go:521: (dbg) Done: kubectl --context addons-660533 delete pod task-pv-pod: (1.73023655s)
addons_test.go:527: (dbg) Run:  kubectl --context addons-660533 delete pvc hpvc
addons_test.go:533: (dbg) Run:  kubectl --context addons-660533 create -f testdata/csi-hostpath-driver/pvc-restore.yaml
addons_test.go:538: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pvc "hpvc-restore" in namespace "default" ...
helpers_test.go:394: (dbg) Run:  kubectl --context addons-660533 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-660533 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-660533 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-660533 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-660533 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-660533 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-660533 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-660533 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-660533 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-660533 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-660533 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-660533 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-660533 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-660533 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-660533 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-660533 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-660533 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-660533 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-660533 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-660533 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
addons_test.go:543: (dbg) Run:  kubectl --context addons-660533 create -f testdata/csi-hostpath-driver/pv-pod-restore.yaml
addons_test.go:548: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pods matching "app=task-pv-pod-restore" in namespace "default" ...
helpers_test.go:344: "task-pv-pod-restore" [6ce69134-1e55-4b86-8f23-2b447d7ebd75] Pending
helpers_test.go:344: "task-pv-pod-restore" [6ce69134-1e55-4b86-8f23-2b447d7ebd75] Pending / Ready:ContainersNotReady (containers with unready status: [task-pv-container]) / ContainersReady:ContainersNotReady (containers with unready status: [task-pv-container])
helpers_test.go:344: "task-pv-pod-restore" [6ce69134-1e55-4b86-8f23-2b447d7ebd75] Running
addons_test.go:548: (dbg) TestAddons/parallel/CSI: app=task-pv-pod-restore healthy within 8.004432789s
addons_test.go:553: (dbg) Run:  kubectl --context addons-660533 delete pod task-pv-pod-restore
addons_test.go:557: (dbg) Run:  kubectl --context addons-660533 delete pvc hpvc-restore
addons_test.go:561: (dbg) Run:  kubectl --context addons-660533 delete volumesnapshot new-snapshot-demo
addons_test.go:992: (dbg) Run:  out/minikube-linux-amd64 -p addons-660533 addons disable volumesnapshots --alsologtostderr -v=1
addons_test.go:992: (dbg) Run:  out/minikube-linux-amd64 -p addons-660533 addons disable csi-hostpath-driver --alsologtostderr -v=1
addons_test.go:992: (dbg) Done: out/minikube-linux-amd64 -p addons-660533 addons disable csi-hostpath-driver --alsologtostderr -v=1: (7.121063022s)
--- PASS: TestAddons/parallel/CSI (69.99s)
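The CSI test walks a full provision, snapshot and restore cycle using the manifests under testdata/csi-hostpath-driver/. Reduced to its kubectl steps (file names as in the test's testdata):

	kubectl --context addons-660533 create -f testdata/csi-hostpath-driver/pvc.yaml             # claim provisioned by the csi-hostpath driver
	kubectl --context addons-660533 create -f testdata/csi-hostpath-driver/pv-pod.yaml          # pod that binds and mounts the claim
	kubectl --context addons-660533 create -f testdata/csi-hostpath-driver/snapshot.yaml        # VolumeSnapshot of the bound claim
	kubectl --context addons-660533 create -f testdata/csi-hostpath-driver/pvc-restore.yaml     # new claim restored from the snapshot
	kubectl --context addons-660533 create -f testdata/csi-hostpath-driver/pv-pod-restore.yaml  # pod that mounts the restored claim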

                                                
                                    
TestAddons/parallel/Headlamp (20.62s)

                                                
                                                
=== RUN   TestAddons/parallel/Headlamp
=== PAUSE TestAddons/parallel/Headlamp

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/Headlamp
addons_test.go:747: (dbg) Run:  out/minikube-linux-amd64 addons enable headlamp -p addons-660533 --alsologtostderr -v=1
I0407 12:17:24.974646 1169716 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=csi-hostpath-driver" in ns "kube-system" ...
addons_test.go:747: (dbg) Done: out/minikube-linux-amd64 addons enable headlamp -p addons-660533 --alsologtostderr -v=1: (1.0801443s)
addons_test.go:752: (dbg) TestAddons/parallel/Headlamp: waiting 8m0s for pods matching "app.kubernetes.io/name=headlamp" in namespace "headlamp" ...
helpers_test.go:344: "headlamp-5d4b5d7bd6-424hk" [234961b5-5ca7-4dfc-a61c-8bbb2ad904cd] Pending
helpers_test.go:344: "headlamp-5d4b5d7bd6-424hk" [234961b5-5ca7-4dfc-a61c-8bbb2ad904cd] Pending / Ready:ContainersNotReady (containers with unready status: [headlamp]) / ContainersReady:ContainersNotReady (containers with unready status: [headlamp])
helpers_test.go:344: "headlamp-5d4b5d7bd6-424hk" [234961b5-5ca7-4dfc-a61c-8bbb2ad904cd] Running
addons_test.go:752: (dbg) TestAddons/parallel/Headlamp: app.kubernetes.io/name=headlamp healthy within 13.004836769s
addons_test.go:992: (dbg) Run:  out/minikube-linux-amd64 -p addons-660533 addons disable headlamp --alsologtostderr -v=1
addons_test.go:992: (dbg) Done: out/minikube-linux-amd64 -p addons-660533 addons disable headlamp --alsologtostderr -v=1: (6.531340545s)
--- PASS: TestAddons/parallel/Headlamp (20.62s)

                                                
                                    
TestAddons/parallel/CloudSpanner (6.66s)

                                                
                                                
=== RUN   TestAddons/parallel/CloudSpanner
=== PAUSE TestAddons/parallel/CloudSpanner

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/CloudSpanner
addons_test.go:779: (dbg) TestAddons/parallel/CloudSpanner: waiting 6m0s for pods matching "app=cloud-spanner-emulator" in namespace "default" ...
helpers_test.go:344: "cloud-spanner-emulator-7dc7f9b5b8-7jf9m" [3462924b-f491-454d-83dd-4e10d4143e06] Running
addons_test.go:779: (dbg) TestAddons/parallel/CloudSpanner: app=cloud-spanner-emulator healthy within 6.007481668s
addons_test.go:992: (dbg) Run:  out/minikube-linux-amd64 -p addons-660533 addons disable cloud-spanner --alsologtostderr -v=1
--- PASS: TestAddons/parallel/CloudSpanner (6.66s)

                                                
                                    
TestAddons/parallel/LocalPath (63.74s)

                                                
                                                
=== RUN   TestAddons/parallel/LocalPath
=== PAUSE TestAddons/parallel/LocalPath

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/LocalPath
addons_test.go:888: (dbg) Run:  kubectl --context addons-660533 apply -f testdata/storage-provisioner-rancher/pvc.yaml
addons_test.go:894: (dbg) Run:  kubectl --context addons-660533 apply -f testdata/storage-provisioner-rancher/pod.yaml
addons_test.go:898: (dbg) TestAddons/parallel/LocalPath: waiting 5m0s for pvc "test-pvc" in namespace "default" ...
helpers_test.go:394: (dbg) Run:  kubectl --context addons-660533 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-660533 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-660533 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-660533 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-660533 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-660533 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-660533 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-660533 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-660533 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-660533 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-660533 get pvc test-pvc -o jsonpath={.status.phase} -n default
addons_test.go:901: (dbg) TestAddons/parallel/LocalPath: waiting 3m0s for pods matching "run=test-local-path" in namespace "default" ...
helpers_test.go:344: "test-local-path" [51772c27-a5ed-4b28-9e4f-feb276b6f5f6] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:344: "test-local-path" [51772c27-a5ed-4b28-9e4f-feb276b6f5f6] Pending / Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
helpers_test.go:344: "test-local-path" [51772c27-a5ed-4b28-9e4f-feb276b6f5f6] Succeeded / Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
addons_test.go:901: (dbg) TestAddons/parallel/LocalPath: run=test-local-path healthy within 9.004734s
addons_test.go:906: (dbg) Run:  kubectl --context addons-660533 get pvc test-pvc -o=json
addons_test.go:915: (dbg) Run:  out/minikube-linux-amd64 -p addons-660533 ssh "cat /opt/local-path-provisioner/pvc-9c6cc59a-2996-4d4d-8cfd-22882b3ed36f_default_test-pvc/file1"
addons_test.go:927: (dbg) Run:  kubectl --context addons-660533 delete pod test-local-path
addons_test.go:931: (dbg) Run:  kubectl --context addons-660533 delete pvc test-pvc
addons_test.go:992: (dbg) Run:  out/minikube-linux-amd64 -p addons-660533 addons disable storage-provisioner-rancher --alsologtostderr -v=1
addons_test.go:992: (dbg) Done: out/minikube-linux-amd64 -p addons-660533 addons disable storage-provisioner-rancher --alsologtostderr -v=1: (43.746523807s)
--- PASS: TestAddons/parallel/LocalPath (63.74s)
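local-path backs each claim with a host directory under /opt/local-path-provisioner inside the VM, which is why the test reads the written file back over ssh. To locate a claim's directory by hand rather than hard-coding the pvc-... name (which contains the volume's UID):

	kubectl --context addons-660533 get pvc test-pvc -o jsonpath='{.spec.volumeName}'
	out/minikube-linux-amd64 -p addons-660533 ssh "ls /opt/local-path-provisioner/"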

                                                
                                    
TestAddons/parallel/NvidiaDevicePlugin (5.64s)

                                                
                                                
=== RUN   TestAddons/parallel/NvidiaDevicePlugin
=== PAUSE TestAddons/parallel/NvidiaDevicePlugin

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/NvidiaDevicePlugin
addons_test.go:964: (dbg) TestAddons/parallel/NvidiaDevicePlugin: waiting 6m0s for pods matching "name=nvidia-device-plugin-ds" in namespace "kube-system" ...
helpers_test.go:344: "nvidia-device-plugin-daemonset-rds5h" [b2520ae3-ad27-4503-9c27-7aff9b16771f] Running
addons_test.go:964: (dbg) TestAddons/parallel/NvidiaDevicePlugin: name=nvidia-device-plugin-ds healthy within 5.004412982s
addons_test.go:992: (dbg) Run:  out/minikube-linux-amd64 -p addons-660533 addons disable nvidia-device-plugin --alsologtostderr -v=1
--- PASS: TestAddons/parallel/NvidiaDevicePlugin (5.64s)

                                                
                                    
TestAddons/parallel/Yakd (11.41s)

                                                
                                                
=== RUN   TestAddons/parallel/Yakd
=== PAUSE TestAddons/parallel/Yakd

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/Yakd
addons_test.go:986: (dbg) TestAddons/parallel/Yakd: waiting 2m0s for pods matching "app.kubernetes.io/name=yakd-dashboard" in namespace "yakd-dashboard" ...
helpers_test.go:344: "yakd-dashboard-575dd5996b-lvzzs" [f83d667f-df46-4240-8e00-2c3d5d809ae5] Running
addons_test.go:986: (dbg) TestAddons/parallel/Yakd: app.kubernetes.io/name=yakd-dashboard healthy within 5.00484543s
addons_test.go:992: (dbg) Run:  out/minikube-linux-amd64 -p addons-660533 addons disable yakd --alsologtostderr -v=1
addons_test.go:992: (dbg) Done: out/minikube-linux-amd64 -p addons-660533 addons disable yakd --alsologtostderr -v=1: (6.407196303s)
--- PASS: TestAddons/parallel/Yakd (11.41s)

                                                
                                    
TestAddons/StoppedEnableDisable (91.41s)

                                                
                                                
=== RUN   TestAddons/StoppedEnableDisable
addons_test.go:170: (dbg) Run:  out/minikube-linux-amd64 stop -p addons-660533
addons_test.go:170: (dbg) Done: out/minikube-linux-amd64 stop -p addons-660533: (1m31.074333521s)
addons_test.go:174: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p addons-660533
addons_test.go:178: (dbg) Run:  out/minikube-linux-amd64 addons disable dashboard -p addons-660533
addons_test.go:183: (dbg) Run:  out/minikube-linux-amd64 addons disable gvisor -p addons-660533
--- PASS: TestAddons/StoppedEnableDisable (91.41s)
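This test checks that addon enable/disable is accepted while the cluster is stopped: the dashboard toggles above return without error even though the node is down. The equivalent manual sequence:

	out/minikube-linux-amd64 stop -p addons-660533
	out/minikube-linux-amd64 addons enable dashboard -p addons-660533
	out/minikube-linux-amd64 addons disable dashboard -p addons-660533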

                                                
                                    
TestCertOptions (66.97s)

                                                
                                                
=== RUN   TestCertOptions
=== PAUSE TestCertOptions

                                                
                                                

                                                
                                                
=== CONT  TestCertOptions
cert_options_test.go:49: (dbg) Run:  out/minikube-linux-amd64 start -p cert-options-919040 --memory=2048 --apiserver-ips=127.0.0.1 --apiserver-ips=192.168.15.15 --apiserver-names=localhost --apiserver-names=www.google.com --apiserver-port=8555 --driver=kvm2  --container-runtime=crio
cert_options_test.go:49: (dbg) Done: out/minikube-linux-amd64 start -p cert-options-919040 --memory=2048 --apiserver-ips=127.0.0.1 --apiserver-ips=192.168.15.15 --apiserver-names=localhost --apiserver-names=www.google.com --apiserver-port=8555 --driver=kvm2  --container-runtime=crio: (1m5.402380797s)
cert_options_test.go:60: (dbg) Run:  out/minikube-linux-amd64 -p cert-options-919040 ssh "openssl x509 -text -noout -in /var/lib/minikube/certs/apiserver.crt"
cert_options_test.go:88: (dbg) Run:  kubectl --context cert-options-919040 config view
cert_options_test.go:100: (dbg) Run:  out/minikube-linux-amd64 ssh -p cert-options-919040 -- "sudo cat /etc/kubernetes/admin.conf"
helpers_test.go:175: Cleaning up "cert-options-919040" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p cert-options-919040
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p cert-options-919040: (1.059666072s)
--- PASS: TestCertOptions (66.97s)
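TestCertOptions starts a cluster with extra --apiserver-ips/--apiserver-names and a non-default --apiserver-port, then reads the generated API server certificate back out of the guest. While the profile still exists, the SAN entries can be inspected directly; piping through grep keeps the output short:

	out/minikube-linux-amd64 -p cert-options-919040 ssh \
	  "openssl x509 -text -noout -in /var/lib/minikube/certs/apiserver.crt" \
	  | grep -A1 "Subject Alternative Name"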

                                                
                                    
TestCertExpiration (262.49s)

                                                
                                                
=== RUN   TestCertExpiration
=== PAUSE TestCertExpiration

                                                
                                                

                                                
                                                
=== CONT  TestCertExpiration
cert_options_test.go:123: (dbg) Run:  out/minikube-linux-amd64 start -p cert-expiration-950320 --memory=2048 --cert-expiration=3m --driver=kvm2  --container-runtime=crio
cert_options_test.go:123: (dbg) Done: out/minikube-linux-amd64 start -p cert-expiration-950320 --memory=2048 --cert-expiration=3m --driver=kvm2  --container-runtime=crio: (48.67087722s)
cert_options_test.go:131: (dbg) Run:  out/minikube-linux-amd64 start -p cert-expiration-950320 --memory=2048 --cert-expiration=8760h --driver=kvm2  --container-runtime=crio
cert_options_test.go:131: (dbg) Done: out/minikube-linux-amd64 start -p cert-expiration-950320 --memory=2048 --cert-expiration=8760h --driver=kvm2  --container-runtime=crio: (32.149008606s)
helpers_test.go:175: Cleaning up "cert-expiration-950320" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p cert-expiration-950320
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p cert-expiration-950320: (1.664830342s)
--- PASS: TestCertExpiration (262.49s)
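The expiration test issues certificates valid for only three minutes, lets them lapse in the gap between the two starts, and then restarts with --cert-expiration=8760h, which presumably succeeds only because the expired certificates are regenerated. The two invocations, as run above:

	out/minikube-linux-amd64 start -p cert-expiration-950320 --memory=2048 --cert-expiration=3m \
	  --driver=kvm2 --container-runtime=crio
	# ...wait out the 3m validity window...
	out/minikube-linux-amd64 start -p cert-expiration-950320 --memory=2048 --cert-expiration=8760h \
	  --driver=kvm2 --container-runtime=crio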

                                                
                                    
TestForceSystemdFlag (53.24s)

                                                
                                                
=== RUN   TestForceSystemdFlag
=== PAUSE TestForceSystemdFlag

                                                
                                                

                                                
                                                
=== CONT  TestForceSystemdFlag
docker_test.go:91: (dbg) Run:  out/minikube-linux-amd64 start -p force-systemd-flag-226463 --memory=2048 --force-systemd --alsologtostderr -v=5 --driver=kvm2  --container-runtime=crio
docker_test.go:91: (dbg) Done: out/minikube-linux-amd64 start -p force-systemd-flag-226463 --memory=2048 --force-systemd --alsologtostderr -v=5 --driver=kvm2  --container-runtime=crio: (52.141915505s)
docker_test.go:132: (dbg) Run:  out/minikube-linux-amd64 -p force-systemd-flag-226463 ssh "cat /etc/crio/crio.conf.d/02-crio.conf"
helpers_test.go:175: Cleaning up "force-systemd-flag-226463" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p force-systemd-flag-226463
--- PASS: TestForceSystemdFlag (53.24s)
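
The assertion reads the CRI-O drop-in written when --force-systemd is set; roughly (key name per CRI-O's TOML config, shown here as an illustration):
	minikube -p force-systemd-flag-226463 ssh -- "cat /etc/crio/crio.conf.d/02-crio.conf" | grep cgroup_manager   # expect "systemd"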

                                                
                                    
x
+
TestForceSystemdEnv (46.76s)

                                                
                                                
=== RUN   TestForceSystemdEnv
=== PAUSE TestForceSystemdEnv

                                                
                                                

                                                
                                                
=== CONT  TestForceSystemdEnv
docker_test.go:155: (dbg) Run:  out/minikube-linux-amd64 start -p force-systemd-env-538369 --memory=2048 --alsologtostderr -v=5 --driver=kvm2  --container-runtime=crio
docker_test.go:155: (dbg) Done: out/minikube-linux-amd64 start -p force-systemd-env-538369 --memory=2048 --alsologtostderr -v=5 --driver=kvm2  --container-runtime=crio: (45.70683956s)
helpers_test.go:175: Cleaning up "force-systemd-env-538369" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p force-systemd-env-538369
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p force-systemd-env-538369: (1.052514137s)
--- PASS: TestForceSystemdEnv (46.76s)

                                                
                                    
x
+
TestKVMDriverInstallOrUpdate (4.11s)

                                                
                                                
=== RUN   TestKVMDriverInstallOrUpdate
=== PAUSE TestKVMDriverInstallOrUpdate

                                                
                                                

                                                
                                                
=== CONT  TestKVMDriverInstallOrUpdate
I0407 13:30:43.513689 1169716 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
I0407 13:30:43.513961 1169716 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/workspace/KVM_Linux_crio_integration/testdata/kvm2-driver-without-version:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
W0407 13:30:43.549837 1169716 install.go:62] docker-machine-driver-kvm2: exit status 1
W0407 13:30:43.550080 1169716 out.go:174] [unset outFile]: * Downloading driver docker-machine-driver-kvm2:
I0407 13:30:43.550163 1169716 download.go:108] Downloading: https://github.com/kubernetes/minikube/releases/download/v1.3.0/docker-machine-driver-kvm2-amd64?checksum=file:https://github.com/kubernetes/minikube/releases/download/v1.3.0/docker-machine-driver-kvm2-amd64.sha256 -> /tmp/TestKVMDriverInstallOrUpdate3324472737/001/docker-machine-driver-kvm2
I0407 13:30:43.828025 1169716 driver.go:46] failed to download arch specific driver: getter: &{Ctx:context.Background Src:https://github.com/kubernetes/minikube/releases/download/v1.3.0/docker-machine-driver-kvm2-amd64?checksum=file:https://github.com/kubernetes/minikube/releases/download/v1.3.0/docker-machine-driver-kvm2-amd64.sha256 Dst:/tmp/TestKVMDriverInstallOrUpdate3324472737/001/docker-machine-driver-kvm2.download Pwd: Mode:2 Umask:---------- Detectors:[0x554c940 0x554c940 0x554c940 0x554c940 0x554c940 0x554c940 0x554c940] Decompressors:map[bz2:0xc000791da8 gz:0xc000791e30 tar:0xc000791de0 tar.bz2:0xc000791df0 tar.gz:0xc000791e00 tar.xz:0xc000791e10 tar.zst:0xc000791e20 tbz2:0xc000791df0 tgz:0xc000791e00 txz:0xc000791e10 tzst:0xc000791e20 xz:0xc000791e38 zip:0xc000791e40 zst:0xc000791e50] Getters:map[file:0xc0018833a0 http:0xc00221cd20 https:0xc00221cd70] Dir:false ProgressListener:<nil> Insecure:false DisableSymlinks:false Options:[]}: invalid checksum: Error downloading checksum file: bad response code: 404. trying to get the common version
I0407 13:30:43.828108 1169716 download.go:108] Downloading: https://github.com/kubernetes/minikube/releases/download/v1.3.0/docker-machine-driver-kvm2?checksum=file:https://github.com/kubernetes/minikube/releases/download/v1.3.0/docker-machine-driver-kvm2.sha256 -> /tmp/TestKVMDriverInstallOrUpdate3324472737/001/docker-machine-driver-kvm2
I0407 13:30:45.812138 1169716 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
I0407 13:30:45.812250 1169716 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/workspace/KVM_Linux_crio_integration/testdata/kvm2-driver-older-version:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
I0407 13:30:45.845981 1169716 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/testdata/kvm2-driver-older-version/docker-machine-driver-kvm2 version is 1.1.1
W0407 13:30:45.846021 1169716 install.go:62] docker-machine-driver-kvm2: docker-machine-driver-kvm2 is version 1.1.1, want 1.3.0
W0407 13:30:45.846090 1169716 out.go:174] [unset outFile]: * Downloading driver docker-machine-driver-kvm2:
I0407 13:30:45.846117 1169716 download.go:108] Downloading: https://github.com/kubernetes/minikube/releases/download/v1.3.0/docker-machine-driver-kvm2-amd64?checksum=file:https://github.com/kubernetes/minikube/releases/download/v1.3.0/docker-machine-driver-kvm2-amd64.sha256 -> /tmp/TestKVMDriverInstallOrUpdate3324472737/002/docker-machine-driver-kvm2
I0407 13:30:45.898856 1169716 driver.go:46] failed to download arch specific driver: getter: &{Ctx:context.Background Src:https://github.com/kubernetes/minikube/releases/download/v1.3.0/docker-machine-driver-kvm2-amd64?checksum=file:https://github.com/kubernetes/minikube/releases/download/v1.3.0/docker-machine-driver-kvm2-amd64.sha256 Dst:/tmp/TestKVMDriverInstallOrUpdate3324472737/002/docker-machine-driver-kvm2.download Pwd: Mode:2 Umask:---------- Detectors:[0x554c940 0x554c940 0x554c940 0x554c940 0x554c940 0x554c940 0x554c940] Decompressors:map[bz2:0xc000791da8 gz:0xc000791e30 tar:0xc000791de0 tar.bz2:0xc000791df0 tar.gz:0xc000791e00 tar.xz:0xc000791e10 tar.zst:0xc000791e20 tbz2:0xc000791df0 tgz:0xc000791e00 txz:0xc000791e10 tzst:0xc000791e20 xz:0xc000791e38 zip:0xc000791e40 zst:0xc000791e50] Getters:map[file:0xc000c84ac0 http:0xc0008c44b0 https:0xc0008c4500] Dir:false ProgressListener:<nil> Insecure:false DisableSymlinks:false Options:[]}: invalid checksum: Error downloading checksum file: bad response code: 404. trying to get the common version
I0407 13:30:45.898908 1169716 download.go:108] Downloading: https://github.com/kubernetes/minikube/releases/download/v1.3.0/docker-machine-driver-kvm2?checksum=file:https://github.com/kubernetes/minikube/releases/download/v1.3.0/docker-machine-driver-kvm2.sha256 -> /tmp/TestKVMDriverInstallOrUpdate3324472737/002/docker-machine-driver-kvm2
--- PASS: TestKVMDriverInstallOrUpdate (4.11s)
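
The log above shows both update paths: first no driver on PATH, then a stale 1.1.1 binary; the arch-specific download 404s and the common release asset is fetched instead. If the driver binary supports the version probe the validator's log suggests, the result can be checked locally with something like (illustrative):
	command -v docker-machine-driver-kvm2
	docker-machine-driver-kvm2 version   # the test expects 1.3.0 after the update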

                                                
                                    
x
+
TestErrorSpam/setup (43.18s)

                                                
                                                
=== RUN   TestErrorSpam/setup
error_spam_test.go:81: (dbg) Run:  out/minikube-linux-amd64 start -p nospam-777756 -n=1 --memory=2250 --wait=false --log_dir=/tmp/nospam-777756 --driver=kvm2  --container-runtime=crio
E0407 12:22:04.919567 1169716 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20602-1162386/.minikube/profiles/addons-660533/client.crt: no such file or directory" logger="UnhandledError"
E0407 12:22:04.926171 1169716 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20602-1162386/.minikube/profiles/addons-660533/client.crt: no such file or directory" logger="UnhandledError"
E0407 12:22:04.937938 1169716 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20602-1162386/.minikube/profiles/addons-660533/client.crt: no such file or directory" logger="UnhandledError"
E0407 12:22:04.959656 1169716 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20602-1162386/.minikube/profiles/addons-660533/client.crt: no such file or directory" logger="UnhandledError"
E0407 12:22:05.001415 1169716 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20602-1162386/.minikube/profiles/addons-660533/client.crt: no such file or directory" logger="UnhandledError"
E0407 12:22:05.083068 1169716 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20602-1162386/.minikube/profiles/addons-660533/client.crt: no such file or directory" logger="UnhandledError"
E0407 12:22:05.244778 1169716 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20602-1162386/.minikube/profiles/addons-660533/client.crt: no such file or directory" logger="UnhandledError"
E0407 12:22:05.566631 1169716 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20602-1162386/.minikube/profiles/addons-660533/client.crt: no such file or directory" logger="UnhandledError"
E0407 12:22:06.208896 1169716 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20602-1162386/.minikube/profiles/addons-660533/client.crt: no such file or directory" logger="UnhandledError"
E0407 12:22:07.491069 1169716 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20602-1162386/.minikube/profiles/addons-660533/client.crt: no such file or directory" logger="UnhandledError"
E0407 12:22:10.054151 1169716 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20602-1162386/.minikube/profiles/addons-660533/client.crt: no such file or directory" logger="UnhandledError"
E0407 12:22:15.176089 1169716 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20602-1162386/.minikube/profiles/addons-660533/client.crt: no such file or directory" logger="UnhandledError"
E0407 12:22:25.418399 1169716 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20602-1162386/.minikube/profiles/addons-660533/client.crt: no such file or directory" logger="UnhandledError"
error_spam_test.go:81: (dbg) Done: out/minikube-linux-amd64 start -p nospam-777756 -n=1 --memory=2250 --wait=false --log_dir=/tmp/nospam-777756 --driver=kvm2  --container-runtime=crio: (43.179268363s)
--- PASS: TestErrorSpam/setup (43.18s)

                                                
                                    
x
+
TestErrorSpam/start (0.41s)

                                                
                                                
=== RUN   TestErrorSpam/start
error_spam_test.go:216: Cleaning up 1 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-777756 --log_dir /tmp/nospam-777756 start --dry-run
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-777756 --log_dir /tmp/nospam-777756 start --dry-run
error_spam_test.go:182: (dbg) Run:  out/minikube-linux-amd64 -p nospam-777756 --log_dir /tmp/nospam-777756 start --dry-run
--- PASS: TestErrorSpam/start (0.41s)

                                                
                                    
x
+
TestErrorSpam/status (0.78s)

                                                
                                                
=== RUN   TestErrorSpam/status
error_spam_test.go:216: Cleaning up 0 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-777756 --log_dir /tmp/nospam-777756 status
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-777756 --log_dir /tmp/nospam-777756 status
error_spam_test.go:182: (dbg) Run:  out/minikube-linux-amd64 -p nospam-777756 --log_dir /tmp/nospam-777756 status
--- PASS: TestErrorSpam/status (0.78s)

                                                
                                    
x
+
TestErrorSpam/pause (1.72s)

                                                
                                                
=== RUN   TestErrorSpam/pause
error_spam_test.go:216: Cleaning up 0 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-777756 --log_dir /tmp/nospam-777756 pause
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-777756 --log_dir /tmp/nospam-777756 pause
error_spam_test.go:182: (dbg) Run:  out/minikube-linux-amd64 -p nospam-777756 --log_dir /tmp/nospam-777756 pause
--- PASS: TestErrorSpam/pause (1.72s)

                                                
                                    
x
+
TestErrorSpam/unpause (1.94s)

                                                
                                                
=== RUN   TestErrorSpam/unpause
error_spam_test.go:216: Cleaning up 0 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-777756 --log_dir /tmp/nospam-777756 unpause
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-777756 --log_dir /tmp/nospam-777756 unpause
E0407 12:22:45.900317 1169716 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20602-1162386/.minikube/profiles/addons-660533/client.crt: no such file or directory" logger="UnhandledError"
error_spam_test.go:182: (dbg) Run:  out/minikube-linux-amd64 -p nospam-777756 --log_dir /tmp/nospam-777756 unpause
--- PASS: TestErrorSpam/unpause (1.94s)

                                                
                                    
x
+
TestErrorSpam/stop (4.85s)

                                                
                                                
=== RUN   TestErrorSpam/stop
error_spam_test.go:216: Cleaning up 0 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-777756 --log_dir /tmp/nospam-777756 stop
error_spam_test.go:159: (dbg) Done: out/minikube-linux-amd64 -p nospam-777756 --log_dir /tmp/nospam-777756 stop: (1.684701357s)
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-777756 --log_dir /tmp/nospam-777756 stop
error_spam_test.go:159: (dbg) Done: out/minikube-linux-amd64 -p nospam-777756 --log_dir /tmp/nospam-777756 stop: (1.182497616s)
error_spam_test.go:182: (dbg) Run:  out/minikube-linux-amd64 -p nospam-777756 --log_dir /tmp/nospam-777756 stop
error_spam_test.go:182: (dbg) Done: out/minikube-linux-amd64 -p nospam-777756 --log_dir /tmp/nospam-777756 stop: (1.985001183s)
--- PASS: TestErrorSpam/stop (4.85s)

                                                
                                    
x
+
TestFunctional/serial/CopySyncFile (0s)

                                                
                                                
=== RUN   TestFunctional/serial/CopySyncFile
functional_test.go:1872: local sync path: /home/jenkins/minikube-integration/20602-1162386/.minikube/files/etc/test/nested/copy/1169716/hosts
--- PASS: TestFunctional/serial/CopySyncFile (0.00s)

                                                
                                    
x
+
TestFunctional/serial/StartWithProxy (57.02s)

                                                
                                                
=== RUN   TestFunctional/serial/StartWithProxy
functional_test.go:2251: (dbg) Run:  out/minikube-linux-amd64 start -p functional-728898 --memory=4000 --apiserver-port=8441 --wait=all --driver=kvm2  --container-runtime=crio
E0407 12:23:26.863577 1169716 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20602-1162386/.minikube/profiles/addons-660533/client.crt: no such file or directory" logger="UnhandledError"
functional_test.go:2251: (dbg) Done: out/minikube-linux-amd64 start -p functional-728898 --memory=4000 --apiserver-port=8441 --wait=all --driver=kvm2  --container-runtime=crio: (57.016589091s)
--- PASS: TestFunctional/serial/StartWithProxy (57.02s)

                                                
                                    
x
+
TestFunctional/serial/AuditLog (0s)

                                                
                                                
=== RUN   TestFunctional/serial/AuditLog
--- PASS: TestFunctional/serial/AuditLog (0.00s)

                                                
                                    
x
+
TestFunctional/serial/SoftStart (33.69s)

                                                
                                                
=== RUN   TestFunctional/serial/SoftStart
I0407 12:23:49.006446 1169716 config.go:182] Loaded profile config "functional-728898": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.32.2
functional_test.go:676: (dbg) Run:  out/minikube-linux-amd64 start -p functional-728898 --alsologtostderr -v=8
functional_test.go:676: (dbg) Done: out/minikube-linux-amd64 start -p functional-728898 --alsologtostderr -v=8: (33.687668436s)
functional_test.go:680: soft start took 33.688520326s for "functional-728898" cluster.
I0407 12:24:22.694498 1169716 config.go:182] Loaded profile config "functional-728898": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.32.2
--- PASS: TestFunctional/serial/SoftStart (33.69s)

                                                
                                    
x
+
TestFunctional/serial/KubeContext (0.05s)

                                                
                                                
=== RUN   TestFunctional/serial/KubeContext
functional_test.go:698: (dbg) Run:  kubectl config current-context
--- PASS: TestFunctional/serial/KubeContext (0.05s)

                                                
                                    
x
+
TestFunctional/serial/KubectlGetPods (0.07s)

                                                
                                                
=== RUN   TestFunctional/serial/KubectlGetPods
functional_test.go:713: (dbg) Run:  kubectl --context functional-728898 get po -A
--- PASS: TestFunctional/serial/KubectlGetPods (0.07s)

                                                
                                    
x
+
TestFunctional/serial/CacheCmd/cache/add_remote (3.68s)

                                                
                                                
=== RUN   TestFunctional/serial/CacheCmd/cache/add_remote
functional_test.go:1066: (dbg) Run:  out/minikube-linux-amd64 -p functional-728898 cache add registry.k8s.io/pause:3.1
functional_test.go:1066: (dbg) Done: out/minikube-linux-amd64 -p functional-728898 cache add registry.k8s.io/pause:3.1: (1.084747265s)
functional_test.go:1066: (dbg) Run:  out/minikube-linux-amd64 -p functional-728898 cache add registry.k8s.io/pause:3.3
functional_test.go:1066: (dbg) Done: out/minikube-linux-amd64 -p functional-728898 cache add registry.k8s.io/pause:3.3: (1.283543097s)
functional_test.go:1066: (dbg) Run:  out/minikube-linux-amd64 -p functional-728898 cache add registry.k8s.io/pause:latest
functional_test.go:1066: (dbg) Done: out/minikube-linux-amd64 -p functional-728898 cache add registry.k8s.io/pause:latest: (1.313894431s)
--- PASS: TestFunctional/serial/CacheCmd/cache/add_remote (3.68s)
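
The same flow by hand, using the commands from the log (a plain minikube binary stands in for out/minikube-linux-amd64):
	minikube -p functional-728898 cache add registry.k8s.io/pause:3.1
	minikube -p functional-728898 ssh sudo crictl images | grep pause   # the cached image should now be present in-node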

                                                
                                    
x
+
TestFunctional/serial/CacheCmd/cache/add_local (2.17s)

                                                
                                                
=== RUN   TestFunctional/serial/CacheCmd/cache/add_local
functional_test.go:1094: (dbg) Run:  docker build -t minikube-local-cache-test:functional-728898 /tmp/TestFunctionalserialCacheCmdcacheadd_local1654266662/001
functional_test.go:1106: (dbg) Run:  out/minikube-linux-amd64 -p functional-728898 cache add minikube-local-cache-test:functional-728898
functional_test.go:1106: (dbg) Done: out/minikube-linux-amd64 -p functional-728898 cache add minikube-local-cache-test:functional-728898: (1.785412393s)
functional_test.go:1111: (dbg) Run:  out/minikube-linux-amd64 -p functional-728898 cache delete minikube-local-cache-test:functional-728898
functional_test.go:1100: (dbg) Run:  docker rmi minikube-local-cache-test:functional-728898
--- PASS: TestFunctional/serial/CacheCmd/cache/add_local (2.17s)

                                                
                                    
x
+
TestFunctional/serial/CacheCmd/cache/CacheDelete (0.06s)

                                                
                                                
=== RUN   TestFunctional/serial/CacheCmd/cache/CacheDelete
functional_test.go:1119: (dbg) Run:  out/minikube-linux-amd64 cache delete registry.k8s.io/pause:3.3
--- PASS: TestFunctional/serial/CacheCmd/cache/CacheDelete (0.06s)

                                                
                                    
x
+
TestFunctional/serial/CacheCmd/cache/list (0.06s)

                                                
                                                
=== RUN   TestFunctional/serial/CacheCmd/cache/list
functional_test.go:1127: (dbg) Run:  out/minikube-linux-amd64 cache list
--- PASS: TestFunctional/serial/CacheCmd/cache/list (0.06s)

                                                
                                    
x
+
TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node (0.26s)

                                                
                                                
=== RUN   TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node
functional_test.go:1141: (dbg) Run:  out/minikube-linux-amd64 -p functional-728898 ssh sudo crictl images
--- PASS: TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node (0.26s)

                                                
                                    
x
+
TestFunctional/serial/CacheCmd/cache/cache_reload (2.05s)

                                                
                                                
=== RUN   TestFunctional/serial/CacheCmd/cache/cache_reload
functional_test.go:1164: (dbg) Run:  out/minikube-linux-amd64 -p functional-728898 ssh sudo crictl rmi registry.k8s.io/pause:latest
functional_test.go:1170: (dbg) Run:  out/minikube-linux-amd64 -p functional-728898 ssh sudo crictl inspecti registry.k8s.io/pause:latest
functional_test.go:1170: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-728898 ssh sudo crictl inspecti registry.k8s.io/pause:latest: exit status 1 (255.867082ms)

                                                
                                                
-- stdout --
	FATA[0000] no such image "registry.k8s.io/pause:latest" present 

                                                
                                                
-- /stdout --
** stderr ** 
	ssh: Process exited with status 1

                                                
                                                
** /stderr **
functional_test.go:1175: (dbg) Run:  out/minikube-linux-amd64 -p functional-728898 cache reload
functional_test.go:1175: (dbg) Done: out/minikube-linux-amd64 -p functional-728898 cache reload: (1.243993003s)
functional_test.go:1180: (dbg) Run:  out/minikube-linux-amd64 -p functional-728898 ssh sudo crictl inspecti registry.k8s.io/pause:latest
--- PASS: TestFunctional/serial/CacheCmd/cache/cache_reload (2.05s)
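
Approximate sequence the reload test follows, reproducible by hand with the commands logged above:
	minikube -p functional-728898 ssh sudo crictl rmi registry.k8s.io/pause:latest
	minikube -p functional-728898 ssh sudo crictl inspecti registry.k8s.io/pause:latest || true   # expected to fail: image removed from the node
	minikube -p functional-728898 cache reload
	minikube -p functional-728898 ssh sudo crictl inspecti registry.k8s.io/pause:latest           # succeeds again from the local cache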

                                                
                                    
x
+
TestFunctional/serial/CacheCmd/cache/delete (0.12s)

                                                
                                                
=== RUN   TestFunctional/serial/CacheCmd/cache/delete
functional_test.go:1189: (dbg) Run:  out/minikube-linux-amd64 cache delete registry.k8s.io/pause:3.1
functional_test.go:1189: (dbg) Run:  out/minikube-linux-amd64 cache delete registry.k8s.io/pause:latest
--- PASS: TestFunctional/serial/CacheCmd/cache/delete (0.12s)

                                                
                                    
x
+
TestFunctional/serial/MinikubeKubectlCmd (0.13s)

                                                
                                                
=== RUN   TestFunctional/serial/MinikubeKubectlCmd
functional_test.go:733: (dbg) Run:  out/minikube-linux-amd64 -p functional-728898 kubectl -- --context functional-728898 get pods
--- PASS: TestFunctional/serial/MinikubeKubectlCmd (0.13s)

                                                
                                    
x
+
TestFunctional/serial/MinikubeKubectlCmdDirectly (0.13s)

                                                
                                                
=== RUN   TestFunctional/serial/MinikubeKubectlCmdDirectly
functional_test.go:758: (dbg) Run:  out/kubectl --context functional-728898 get pods
--- PASS: TestFunctional/serial/MinikubeKubectlCmdDirectly (0.13s)

                                                
                                    
x
+
TestFunctional/serial/ExtraConfig (389.39s)

                                                
                                                
=== RUN   TestFunctional/serial/ExtraConfig
functional_test.go:774: (dbg) Run:  out/minikube-linux-amd64 start -p functional-728898 --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision --wait=all
E0407 12:24:48.788424 1169716 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20602-1162386/.minikube/profiles/addons-660533/client.crt: no such file or directory" logger="UnhandledError"
E0407 12:27:04.919692 1169716 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20602-1162386/.minikube/profiles/addons-660533/client.crt: no such file or directory" logger="UnhandledError"
E0407 12:27:32.636883 1169716 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20602-1162386/.minikube/profiles/addons-660533/client.crt: no such file or directory" logger="UnhandledError"
functional_test.go:774: (dbg) Done: out/minikube-linux-amd64 start -p functional-728898 --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision --wait=all: (6m29.391069472s)
functional_test.go:778: restart took 6m29.391256626s for "functional-728898" cluster.
I0407 12:31:00.876156 1169716 config.go:182] Loaded profile config "functional-728898": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.32.2
--- PASS: TestFunctional/serial/ExtraConfig (389.39s)
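
One way to confirm by hand that the --extra-config value reached the control plane (the pod name follows the usual kube-apiserver-<node> convention; this is an assumption, not a step the test runs):
	kubectl --context functional-728898 -n kube-system get pod kube-apiserver-functional-728898 -o yaml | grep enable-admission-plugins   # expect NamespaceAutoProvision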

                                                
                                    
x
+
TestFunctional/serial/ComponentHealth (0.08s)

                                                
                                                
=== RUN   TestFunctional/serial/ComponentHealth
functional_test.go:827: (dbg) Run:  kubectl --context functional-728898 get po -l tier=control-plane -n kube-system -o=json
functional_test.go:842: etcd phase: Running
functional_test.go:852: etcd status: Ready
functional_test.go:842: kube-apiserver phase: Running
functional_test.go:852: kube-apiserver status: Ready
functional_test.go:842: kube-controller-manager phase: Running
functional_test.go:852: kube-controller-manager status: Ready
functional_test.go:842: kube-scheduler phase: Running
functional_test.go:852: kube-scheduler status: Ready
--- PASS: TestFunctional/serial/ComponentHealth (0.08s)
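
A hand-rolled version of the same health check (label selector taken from the log; the jsonpath formatting is an added illustration):
	kubectl --context functional-728898 -n kube-system get po -l tier=control-plane -o jsonpath='{range .items[*]}{.metadata.name}{"\t"}{.status.phase}{"\n"}{end}'   # each component should report Running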

                                                
                                    
x
+
TestFunctional/serial/LogsCmd (1.46s)

                                                
                                                
=== RUN   TestFunctional/serial/LogsCmd
functional_test.go:1253: (dbg) Run:  out/minikube-linux-amd64 -p functional-728898 logs
functional_test.go:1253: (dbg) Done: out/minikube-linux-amd64 -p functional-728898 logs: (1.463363851s)
--- PASS: TestFunctional/serial/LogsCmd (1.46s)

                                                
                                    
x
+
TestFunctional/serial/LogsFileCmd (1.55s)

                                                
                                                
=== RUN   TestFunctional/serial/LogsFileCmd
functional_test.go:1267: (dbg) Run:  out/minikube-linux-amd64 -p functional-728898 logs --file /tmp/TestFunctionalserialLogsFileCmd3182677081/001/logs.txt
functional_test.go:1267: (dbg) Done: out/minikube-linux-amd64 -p functional-728898 logs --file /tmp/TestFunctionalserialLogsFileCmd3182677081/001/logs.txt: (1.544111674s)
--- PASS: TestFunctional/serial/LogsFileCmd (1.55s)

                                                
                                    
x
+
TestFunctional/serial/InvalidService (4.73s)

                                                
                                                
=== RUN   TestFunctional/serial/InvalidService
functional_test.go:2338: (dbg) Run:  kubectl --context functional-728898 apply -f testdata/invalidsvc.yaml
functional_test.go:2352: (dbg) Run:  out/minikube-linux-amd64 service invalid-svc -p functional-728898
functional_test.go:2352: (dbg) Non-zero exit: out/minikube-linux-amd64 service invalid-svc -p functional-728898: exit status 115 (387.344989ms)

                                                
                                                
-- stdout --
	|-----------|-------------|-------------|-----------------------------|
	| NAMESPACE |    NAME     | TARGET PORT |             URL             |
	|-----------|-------------|-------------|-----------------------------|
	| default   | invalid-svc |          80 | http://192.168.39.151:32589 |
	|-----------|-------------|-------------|-----------------------------|
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to SVC_UNREACHABLE: service not available: no running pod for service invalid-svc found
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_service_96b204199e3191fa1740d4430b018a3c8028d52d_0.log                 │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
functional_test.go:2344: (dbg) Run:  kubectl --context functional-728898 delete -f testdata/invalidsvc.yaml
functional_test.go:2344: (dbg) Done: kubectl --context functional-728898 delete -f testdata/invalidsvc.yaml: (1.081982591s)
--- PASS: TestFunctional/serial/InvalidService (4.73s)

                                                
                                    
x
+
TestFunctional/parallel/ConfigCmd (0.53s)

                                                
                                                
=== RUN   TestFunctional/parallel/ConfigCmd
=== PAUSE TestFunctional/parallel/ConfigCmd

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/ConfigCmd
functional_test.go:1216: (dbg) Run:  out/minikube-linux-amd64 -p functional-728898 config unset cpus
functional_test.go:1216: (dbg) Run:  out/minikube-linux-amd64 -p functional-728898 config get cpus
functional_test.go:1216: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-728898 config get cpus: exit status 14 (105.804287ms)

                                                
                                                
** stderr ** 
	Error: specified key could not be found in config

                                                
                                                
** /stderr **
functional_test.go:1216: (dbg) Run:  out/minikube-linux-amd64 -p functional-728898 config set cpus 2
functional_test.go:1216: (dbg) Run:  out/minikube-linux-amd64 -p functional-728898 config get cpus
functional_test.go:1216: (dbg) Run:  out/minikube-linux-amd64 -p functional-728898 config unset cpus
functional_test.go:1216: (dbg) Run:  out/minikube-linux-amd64 -p functional-728898 config get cpus
functional_test.go:1216: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-728898 config get cpus: exit status 14 (72.523667ms)

                                                
                                                
** stderr ** 
	Error: specified key could not be found in config

                                                
                                                
** /stderr **
--- PASS: TestFunctional/parallel/ConfigCmd (0.53s)

                                                
                                    
x
+
TestFunctional/parallel/DashboardCmd (140.76s)

                                                
                                                
=== RUN   TestFunctional/parallel/DashboardCmd
=== PAUSE TestFunctional/parallel/DashboardCmd

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/DashboardCmd
functional_test.go:922: (dbg) daemon: [out/minikube-linux-amd64 dashboard --url --port 36195 -p functional-728898 --alsologtostderr -v=1]
functional_test.go:927: (dbg) stopping [out/minikube-linux-amd64 dashboard --url --port 36195 -p functional-728898 --alsologtostderr -v=1] ...
helpers_test.go:508: unable to kill pid 1180039: os: process already finished
--- PASS: TestFunctional/parallel/DashboardCmd (140.76s)

                                                
                                    
x
+
TestFunctional/parallel/DryRun (0.37s)

                                                
                                                
=== RUN   TestFunctional/parallel/DryRun
=== PAUSE TestFunctional/parallel/DryRun

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/DryRun
functional_test.go:991: (dbg) Run:  out/minikube-linux-amd64 start -p functional-728898 --dry-run --memory 250MB --alsologtostderr --driver=kvm2  --container-runtime=crio
functional_test.go:991: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p functional-728898 --dry-run --memory 250MB --alsologtostderr --driver=kvm2  --container-runtime=crio: exit status 23 (189.462642ms)

                                                
                                                
-- stdout --
	* [functional-728898] minikube v1.35.0 on Ubuntu 20.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=20602
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/20602-1162386/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/20602-1162386/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the kvm2 driver based on existing profile
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0407 12:31:27.398444 1179599 out.go:345] Setting OutFile to fd 1 ...
	I0407 12:31:27.398568 1179599 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0407 12:31:27.398579 1179599 out.go:358] Setting ErrFile to fd 2...
	I0407 12:31:27.398584 1179599 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0407 12:31:27.398868 1179599 root.go:338] Updating PATH: /home/jenkins/minikube-integration/20602-1162386/.minikube/bin
	I0407 12:31:27.399712 1179599 out.go:352] Setting JSON to false
	I0407 12:31:27.401219 1179599 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-2","uptime":15231,"bootTime":1744013856,"procs":220,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1078-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0407 12:31:27.401328 1179599 start.go:139] virtualization: kvm guest
	I0407 12:31:27.404909 1179599 out.go:177] * [functional-728898] minikube v1.35.0 on Ubuntu 20.04 (kvm/amd64)
	I0407 12:31:27.406776 1179599 out.go:177]   - MINIKUBE_LOCATION=20602
	I0407 12:31:27.406785 1179599 notify.go:220] Checking for updates...
	I0407 12:31:27.408834 1179599 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0407 12:31:27.411126 1179599 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/20602-1162386/kubeconfig
	I0407 12:31:27.412941 1179599 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/20602-1162386/.minikube
	I0407 12:31:27.414436 1179599 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0407 12:31:27.415912 1179599 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0407 12:31:27.417835 1179599 config.go:182] Loaded profile config "functional-728898": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.32.2
	I0407 12:31:27.418504 1179599 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0407 12:31:27.418633 1179599 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0407 12:31:27.438774 1179599 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44033
	I0407 12:31:27.439438 1179599 main.go:141] libmachine: () Calling .GetVersion
	I0407 12:31:27.440241 1179599 main.go:141] libmachine: Using API Version  1
	I0407 12:31:27.440278 1179599 main.go:141] libmachine: () Calling .SetConfigRaw
	I0407 12:31:27.440819 1179599 main.go:141] libmachine: () Calling .GetMachineName
	I0407 12:31:27.441080 1179599 main.go:141] libmachine: (functional-728898) Calling .DriverName
	I0407 12:31:27.441401 1179599 driver.go:394] Setting default libvirt URI to qemu:///system
	I0407 12:31:27.441793 1179599 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0407 12:31:27.441865 1179599 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0407 12:31:27.462141 1179599 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36887
	I0407 12:31:27.462750 1179599 main.go:141] libmachine: () Calling .GetVersion
	I0407 12:31:27.463423 1179599 main.go:141] libmachine: Using API Version  1
	I0407 12:31:27.463465 1179599 main.go:141] libmachine: () Calling .SetConfigRaw
	I0407 12:31:27.464040 1179599 main.go:141] libmachine: () Calling .GetMachineName
	I0407 12:31:27.464372 1179599 main.go:141] libmachine: (functional-728898) Calling .DriverName
	I0407 12:31:27.511904 1179599 out.go:177] * Using the kvm2 driver based on existing profile
	I0407 12:31:27.513567 1179599 start.go:297] selected driver: kvm2
	I0407 12:31:27.513600 1179599 start.go:901] validating driver "kvm2" against &{Name:functional-728898 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.35.0-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.46-1743675393-20591@sha256:8de8167e280414c9d16e4c3da59bc85bc7c9cc24228af995863fc7bcabfcf727 Memory:4000 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.32.2 ClusterName:functional-728898 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.151 Port:8441 KubernetesVersion:v1.32.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0407 12:31:27.513802 1179599 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0407 12:31:27.517493 1179599 out.go:201] 
	W0407 12:31:27.519700 1179599 out.go:270] X Exiting due to RSRC_INSUFFICIENT_REQ_MEMORY: Requested memory allocation 250MiB is less than the usable minimum of 1800MB
	X Exiting due to RSRC_INSUFFICIENT_REQ_MEMORY: Requested memory allocation 250MiB is less than the usable minimum of 1800MB
	I0407 12:31:27.521515 1179599 out.go:201] 

                                                
                                                
** /stderr **
functional_test.go:1008: (dbg) Run:  out/minikube-linux-amd64 start -p functional-728898 --dry-run --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio
--- PASS: TestFunctional/parallel/DryRun (0.37s)
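
The two dry-run outcomes exercised above, sketched with a local minikube binary:
	minikube start -p functional-728898 --dry-run --memory 250MB --driver=kvm2 --container-runtime=crio; echo $?   # non-zero: RSRC_INSUFFICIENT_REQ_MEMORY (usable minimum is 1800MB)
	minikube start -p functional-728898 --dry-run --alsologtostderr -v=1 --driver=kvm2 --container-runtime=crio; echo $?   # 0: the existing profile validates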

                                                
                                    
x
+
TestFunctional/parallel/InternationalLanguage (0.21s)

                                                
                                                
=== RUN   TestFunctional/parallel/InternationalLanguage
=== PAUSE TestFunctional/parallel/InternationalLanguage

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/InternationalLanguage
functional_test.go:1037: (dbg) Run:  out/minikube-linux-amd64 start -p functional-728898 --dry-run --memory 250MB --alsologtostderr --driver=kvm2  --container-runtime=crio
functional_test.go:1037: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p functional-728898 --dry-run --memory 250MB --alsologtostderr --driver=kvm2  --container-runtime=crio: exit status 23 (205.584741ms)

                                                
                                                
-- stdout --
	* [functional-728898] minikube v1.35.0 sur Ubuntu 20.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=20602
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/20602-1162386/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/20602-1162386/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Utilisation du pilote kvm2 basé sur le profil existant
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0407 12:31:27.823336 1179726 out.go:345] Setting OutFile to fd 1 ...
	I0407 12:31:27.823511 1179726 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0407 12:31:27.823530 1179726 out.go:358] Setting ErrFile to fd 2...
	I0407 12:31:27.823537 1179726 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0407 12:31:27.823874 1179726 root.go:338] Updating PATH: /home/jenkins/minikube-integration/20602-1162386/.minikube/bin
	I0407 12:31:27.824512 1179726 out.go:352] Setting JSON to false
	I0407 12:31:27.825851 1179726 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-2","uptime":15232,"bootTime":1744013856,"procs":229,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1078-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0407 12:31:27.825986 1179726 start.go:139] virtualization: kvm guest
	I0407 12:31:27.828807 1179726 out.go:177] * [functional-728898] minikube v1.35.0 sur Ubuntu 20.04 (kvm/amd64)
	I0407 12:31:27.831661 1179726 notify.go:220] Checking for updates...
	I0407 12:31:27.831689 1179726 out.go:177]   - MINIKUBE_LOCATION=20602
	I0407 12:31:27.835776 1179726 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0407 12:31:27.839208 1179726 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/20602-1162386/kubeconfig
	I0407 12:31:27.841222 1179726 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/20602-1162386/.minikube
	I0407 12:31:27.843251 1179726 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0407 12:31:27.845401 1179726 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0407 12:31:27.847889 1179726 config.go:182] Loaded profile config "functional-728898": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.32.2
	I0407 12:31:27.848422 1179726 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0407 12:31:27.848495 1179726 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0407 12:31:27.872082 1179726 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43461
	I0407 12:31:27.872734 1179726 main.go:141] libmachine: () Calling .GetVersion
	I0407 12:31:27.873397 1179726 main.go:141] libmachine: Using API Version  1
	I0407 12:31:27.873425 1179726 main.go:141] libmachine: () Calling .SetConfigRaw
	I0407 12:31:27.873935 1179726 main.go:141] libmachine: () Calling .GetMachineName
	I0407 12:31:27.874322 1179726 main.go:141] libmachine: (functional-728898) Calling .DriverName
	I0407 12:31:27.874754 1179726 driver.go:394] Setting default libvirt URI to qemu:///system
	I0407 12:31:27.875323 1179726 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0407 12:31:27.875388 1179726 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0407 12:31:27.897668 1179726 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35575
	I0407 12:31:27.898307 1179726 main.go:141] libmachine: () Calling .GetVersion
	I0407 12:31:27.899292 1179726 main.go:141] libmachine: Using API Version  1
	I0407 12:31:27.899465 1179726 main.go:141] libmachine: () Calling .SetConfigRaw
	I0407 12:31:27.900064 1179726 main.go:141] libmachine: () Calling .GetMachineName
	I0407 12:31:27.900462 1179726 main.go:141] libmachine: (functional-728898) Calling .DriverName
	I0407 12:31:27.944079 1179726 out.go:177] * Utilisation du pilote kvm2 basé sur le profil existant
	I0407 12:31:27.946374 1179726 start.go:297] selected driver: kvm2
	I0407 12:31:27.946409 1179726 start.go:901] validating driver "kvm2" against &{Name:functional-728898 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.35.0-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.46-1743675393-20591@sha256:8de8167e280414c9d16e4c3da59bc85bc7c9cc24228af995863fc7bcabfcf727 Memory:4000 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.32.2 ClusterName:functional-728898 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.151 Port:8441 KubernetesVersion:v1.32.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0407 12:31:27.946543 1179726 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0407 12:31:27.949744 1179726 out.go:201] 
	W0407 12:31:27.952054 1179726 out.go:270] X Fermeture en raison de RSRC_INSUFFICIENT_REQ_MEMORY : L'allocation de mémoire demandée 250 Mio est inférieure au minimum utilisable de 1800 Mo
	X Fermeture en raison de RSRC_INSUFFICIENT_REQ_MEMORY : L'allocation de mémoire demandée 250 Mio est inférieure au minimum utilisable de 1800 Mo
	I0407 12:31:27.953828 1179726 out.go:201] 

                                                
                                                
** /stderr **
--- PASS: TestFunctional/parallel/InternationalLanguage (0.21s)

                                                
                                    
x
+
TestFunctional/parallel/StatusCmd (1.21s)

                                                
                                                
=== RUN   TestFunctional/parallel/StatusCmd
=== PAUSE TestFunctional/parallel/StatusCmd

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/StatusCmd
functional_test.go:871: (dbg) Run:  out/minikube-linux-amd64 -p functional-728898 status
functional_test.go:877: (dbg) Run:  out/minikube-linux-amd64 -p functional-728898 status -f host:{{.Host}},kublet:{{.Kubelet}},apiserver:{{.APIServer}},kubeconfig:{{.Kubeconfig}}
functional_test.go:889: (dbg) Run:  out/minikube-linux-amd64 -p functional-728898 status -o json
--- PASS: TestFunctional/parallel/StatusCmd (1.21s)

                                                
                                    
x
+
TestFunctional/parallel/ServiceCmdConnect (11.78s)

                                                
                                                
=== RUN   TestFunctional/parallel/ServiceCmdConnect
=== PAUSE TestFunctional/parallel/ServiceCmdConnect

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/ServiceCmdConnect
functional_test.go:1646: (dbg) Run:  kubectl --context functional-728898 create deployment hello-node-connect --image=registry.k8s.io/echoserver:1.8
functional_test.go:1652: (dbg) Run:  kubectl --context functional-728898 expose deployment hello-node-connect --type=NodePort --port=8080
functional_test.go:1657: (dbg) TestFunctional/parallel/ServiceCmdConnect: waiting 10m0s for pods matching "app=hello-node-connect" in namespace "default" ...
helpers_test.go:344: "hello-node-connect-58f9cf68d8-xlzk8" [238652ef-7f48-42be-ab1b-58c79e679abd] Pending / Ready:ContainersNotReady (containers with unready status: [echoserver]) / ContainersReady:ContainersNotReady (containers with unready status: [echoserver])
helpers_test.go:344: "hello-node-connect-58f9cf68d8-xlzk8" [238652ef-7f48-42be-ab1b-58c79e679abd] Running
functional_test.go:1657: (dbg) TestFunctional/parallel/ServiceCmdConnect: app=hello-node-connect healthy within 11.005487077s
functional_test.go:1666: (dbg) Run:  out/minikube-linux-amd64 -p functional-728898 service hello-node-connect --url
functional_test.go:1672: found endpoint for hello-node-connect: http://192.168.39.151:31129
functional_test.go:1692: http://192.168.39.151:31129: success! body:

                                                
                                                

                                                
                                                
Hostname: hello-node-connect-58f9cf68d8-xlzk8

                                                
                                                
Pod Information:
	-no pod information available-

                                                
                                                
Server values:
	server_version=nginx: 1.13.3 - lua: 10008

                                                
                                                
Request Information:
	client_address=10.244.0.1
	method=GET
	real path=/
	query=
	request_version=1.1
	request_uri=http://192.168.39.151:8080/

                                                
                                                
Request Headers:
	accept-encoding=gzip
	host=192.168.39.151:31129
	user-agent=Go-http-client/1.1

                                                
                                                
Request Body:
	-no body in request-

                                                
                                                
--- PASS: TestFunctional/parallel/ServiceCmdConnect (11.78s)
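
The same flow by hand (image, names and port from the log; the final curl is an added illustration):
	kubectl --context functional-728898 create deployment hello-node-connect --image=registry.k8s.io/echoserver:1.8
	kubectl --context functional-728898 expose deployment hello-node-connect --type=NodePort --port=8080
	URL=$(minikube -p functional-728898 service hello-node-connect --url)   # e.g. http://192.168.39.151:31129
	curl -s "$URL"   # echoserver reports the pod hostname and request headers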

                                                
                                    
x
+
TestFunctional/parallel/AddonsCmd (0.19s)

                                                
                                                
=== RUN   TestFunctional/parallel/AddonsCmd
=== PAUSE TestFunctional/parallel/AddonsCmd

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/AddonsCmd
functional_test.go:1707: (dbg) Run:  out/minikube-linux-amd64 -p functional-728898 addons list
functional_test.go:1719: (dbg) Run:  out/minikube-linux-amd64 -p functional-728898 addons list -o json
--- PASS: TestFunctional/parallel/AddonsCmd (0.19s)

TestFunctional/parallel/SSHCmd (0.62s)

=== RUN   TestFunctional/parallel/SSHCmd
=== PAUSE TestFunctional/parallel/SSHCmd
=== CONT  TestFunctional/parallel/SSHCmd
functional_test.go:1742: (dbg) Run:  out/minikube-linux-amd64 -p functional-728898 ssh "echo hello"
functional_test.go:1759: (dbg) Run:  out/minikube-linux-amd64 -p functional-728898 ssh "cat /etc/hostname"
--- PASS: TestFunctional/parallel/SSHCmd (0.62s)
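The two assertions above are plain one-shot ssh invocations; a minimal sketch against the same profile:
	minikube -p functional-728898 ssh "echo hello"          # run a single command in the guest
	minikube -p functional-728898 ssh "cat /etc/hostname"   # expected to print functional-728898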

TestFunctional/parallel/CpCmd (1.72s)

=== RUN   TestFunctional/parallel/CpCmd
=== PAUSE TestFunctional/parallel/CpCmd
=== CONT  TestFunctional/parallel/CpCmd
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p functional-728898 cp testdata/cp-test.txt /home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p functional-728898 ssh -n functional-728898 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p functional-728898 cp functional-728898:/home/docker/cp-test.txt /tmp/TestFunctionalparallelCpCmd141364228/001/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p functional-728898 ssh -n functional-728898 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p functional-728898 cp testdata/cp-test.txt /tmp/does/not/exist/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p functional-728898 ssh -n functional-728898 "sudo cat /tmp/does/not/exist/cp-test.txt"
--- PASS: TestFunctional/parallel/CpCmd (1.72s)
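minikube cp copies in both directions (host to node and node to host); a sketch of the copies exercised above (the /tmp destination on the host is illustrative):
	minikube -p functional-728898 cp testdata/cp-test.txt /home/docker/cp-test.txt                 # host -> node
	minikube -p functional-728898 cp functional-728898:/home/docker/cp-test.txt /tmp/cp-test.txt   # node -> host
	minikube -p functional-728898 ssh -n functional-728898 "sudo cat /home/docker/cp-test.txt"     # verify contents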

TestFunctional/parallel/FileSync (0.23s)

=== RUN   TestFunctional/parallel/FileSync
=== PAUSE TestFunctional/parallel/FileSync
=== CONT  TestFunctional/parallel/FileSync
functional_test.go:1946: Checking for existence of /etc/test/nested/copy/1169716/hosts within VM
functional_test.go:1948: (dbg) Run:  out/minikube-linux-amd64 -p functional-728898 ssh "sudo cat /etc/test/nested/copy/1169716/hosts"
functional_test.go:1953: file sync test content: Test file for checking file sync process
--- PASS: TestFunctional/parallel/FileSync (0.23s)
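FileSync covers minikube's file-sync mechanism: files placed under $MINIKUBE_HOME/files/<path> on the host are copied to /<path> inside the guest when the cluster starts. A sketch under that assumption (1169716 is just this run's test PID):
	mkdir -p ~/.minikube/files/etc/test/nested/copy/1169716
	echo "Test file for checking file sync process" > ~/.minikube/files/etc/test/nested/copy/1169716/hosts
	minikube -p functional-728898 ssh "sudo cat /etc/test/nested/copy/1169716/hosts"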

TestFunctional/parallel/CertSync (1.72s)

=== RUN   TestFunctional/parallel/CertSync
=== PAUSE TestFunctional/parallel/CertSync
=== CONT  TestFunctional/parallel/CertSync
functional_test.go:1989: Checking for existence of /etc/ssl/certs/1169716.pem within VM
functional_test.go:1990: (dbg) Run:  out/minikube-linux-amd64 -p functional-728898 ssh "sudo cat /etc/ssl/certs/1169716.pem"
functional_test.go:1989: Checking for existence of /usr/share/ca-certificates/1169716.pem within VM
functional_test.go:1990: (dbg) Run:  out/minikube-linux-amd64 -p functional-728898 ssh "sudo cat /usr/share/ca-certificates/1169716.pem"
functional_test.go:1989: Checking for existence of /etc/ssl/certs/51391683.0 within VM
functional_test.go:1990: (dbg) Run:  out/minikube-linux-amd64 -p functional-728898 ssh "sudo cat /etc/ssl/certs/51391683.0"
functional_test.go:2016: Checking for existence of /etc/ssl/certs/11697162.pem within VM
functional_test.go:2017: (dbg) Run:  out/minikube-linux-amd64 -p functional-728898 ssh "sudo cat /etc/ssl/certs/11697162.pem"
functional_test.go:2016: Checking for existence of /usr/share/ca-certificates/11697162.pem within VM
functional_test.go:2017: (dbg) Run:  out/minikube-linux-amd64 -p functional-728898 ssh "sudo cat /usr/share/ca-certificates/11697162.pem"
functional_test.go:2016: Checking for existence of /etc/ssl/certs/3ec20f2e.0 within VM
functional_test.go:2017: (dbg) Run:  out/minikube-linux-amd64 -p functional-728898 ssh "sudo cat /etc/ssl/certs/3ec20f2e.0"
--- PASS: TestFunctional/parallel/CertSync (1.72s)
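CertSync checks that a certificate dropped into $MINIKUBE_HOME/certs on the host is installed inside the guest both under /etc/ssl/certs and /usr/share/ca-certificates, plus as an OpenSSL hash-named copy (the 51391683.0 / 3ec20f2e.0 entries above). A sketch under that assumption:
	cp 1169716.pem ~/.minikube/certs/
	minikube -p functional-728898 ssh "sudo cat /etc/ssl/certs/1169716.pem"
	minikube -p functional-728898 ssh "ls /etc/ssl/certs/*.0"   # hash-named copies of the same certs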

TestFunctional/parallel/NodeLabels (0.09s)

=== RUN   TestFunctional/parallel/NodeLabels
=== PAUSE TestFunctional/parallel/NodeLabels
=== CONT  TestFunctional/parallel/NodeLabels
functional_test.go:236: (dbg) Run:  kubectl --context functional-728898 get nodes --output=go-template "--template='{{range $k, $v := (index .items 0).metadata.labels}}{{$k}} {{end}}'"
--- PASS: TestFunctional/parallel/NodeLabels (0.09s)
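The go-template above only flattens the node's label keys; an equivalent, easier-to-read check would be:
	kubectl --context functional-728898 get nodes --show-labels
	kubectl --context functional-728898 get nodes -o jsonpath='{.items[0].metadata.labels}'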

TestFunctional/parallel/NonActiveRuntimeDisabled (0.65s)

=== RUN   TestFunctional/parallel/NonActiveRuntimeDisabled
=== PAUSE TestFunctional/parallel/NonActiveRuntimeDisabled
=== CONT  TestFunctional/parallel/NonActiveRuntimeDisabled
functional_test.go:2044: (dbg) Run:  out/minikube-linux-amd64 -p functional-728898 ssh "sudo systemctl is-active docker"
functional_test.go:2044: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-728898 ssh "sudo systemctl is-active docker": exit status 1 (334.745034ms)

-- stdout --
	inactive

-- /stdout --
** stderr ** 
	ssh: Process exited with status 3

** /stderr **
functional_test.go:2044: (dbg) Run:  out/minikube-linux-amd64 -p functional-728898 ssh "sudo systemctl is-active containerd"
functional_test.go:2044: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-728898 ssh "sudo systemctl is-active containerd": exit status 1 (311.978095ms)

-- stdout --
	inactive

-- /stdout --
** stderr ** 
	ssh: Process exited with status 3

** /stderr **
--- PASS: TestFunctional/parallel/NonActiveRuntimeDisabled (0.65s)
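The non-zero exits above are the expected result: with cri-o as the active runtime, systemctl is-active reports docker and containerd as inactive and exits with status 3, which ssh then propagates. A sketch of the same check:
	minikube -p functional-728898 ssh "sudo systemctl is-active crio"         # active, exit 0
	minikube -p functional-728898 ssh "sudo systemctl is-active docker"       # inactive, exit 3
	minikube -p functional-728898 ssh "sudo systemctl is-active containerd"   # inactive, exit 3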

TestFunctional/parallel/License (0.45s)

=== RUN   TestFunctional/parallel/License
=== PAUSE TestFunctional/parallel/License
=== CONT  TestFunctional/parallel/License
functional_test.go:2305: (dbg) Run:  out/minikube-linux-amd64 license
--- PASS: TestFunctional/parallel/License (0.45s)

TestFunctional/parallel/Version/short (0.06s)

=== RUN   TestFunctional/parallel/Version/short
=== PAUSE TestFunctional/parallel/Version/short
=== CONT  TestFunctional/parallel/Version/short
functional_test.go:2273: (dbg) Run:  out/minikube-linux-amd64 -p functional-728898 version --short
--- PASS: TestFunctional/parallel/Version/short (0.06s)

TestFunctional/parallel/Version/components (0.53s)

=== RUN   TestFunctional/parallel/Version/components
=== PAUSE TestFunctional/parallel/Version/components
=== CONT  TestFunctional/parallel/Version/components
functional_test.go:2287: (dbg) Run:  out/minikube-linux-amd64 -p functional-728898 version -o=json --components
--- PASS: TestFunctional/parallel/Version/components (0.53s)

TestFunctional/parallel/ImageCommands/ImageListShort (0.25s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageListShort
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListShort
=== CONT  TestFunctional/parallel/ImageCommands/ImageListShort
functional_test.go:278: (dbg) Run:  out/minikube-linux-amd64 -p functional-728898 image ls --format short --alsologtostderr
functional_test.go:283: (dbg) Stdout: out/minikube-linux-amd64 -p functional-728898 image ls --format short --alsologtostderr:
registry.k8s.io/pause:latest
registry.k8s.io/pause:3.3
registry.k8s.io/pause:3.10
registry.k8s.io/pause:3.1
registry.k8s.io/kube-scheduler:v1.32.2
registry.k8s.io/kube-proxy:v1.32.2
registry.k8s.io/kube-controller-manager:v1.32.2
registry.k8s.io/kube-apiserver:v1.32.2
registry.k8s.io/etcd:3.5.16-0
registry.k8s.io/echoserver:1.8
registry.k8s.io/coredns/coredns:v1.11.3
localhost/minikube-local-cache-test:functional-728898
localhost/kicbase/echo-server:functional-728898
gcr.io/k8s-minikube/storage-provisioner:v5
gcr.io/k8s-minikube/busybox:1.28.4-glibc
docker.io/library/nginx:latest
docker.io/library/nginx:alpine
docker.io/kindest/kindnetd:v20241212-9f82dd49
functional_test.go:286: (dbg) Stderr: out/minikube-linux-amd64 -p functional-728898 image ls --format short --alsologtostderr:
I0407 12:32:27.193243 1180699 out.go:345] Setting OutFile to fd 1 ...
I0407 12:32:27.193430 1180699 out.go:392] TERM=,COLORTERM=, which probably does not support color
I0407 12:32:27.193446 1180699 out.go:358] Setting ErrFile to fd 2...
I0407 12:32:27.193453 1180699 out.go:392] TERM=,COLORTERM=, which probably does not support color
I0407 12:32:27.193869 1180699 root.go:338] Updating PATH: /home/jenkins/minikube-integration/20602-1162386/.minikube/bin
I0407 12:32:27.194678 1180699 config.go:182] Loaded profile config "functional-728898": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.32.2
I0407 12:32:27.194854 1180699 config.go:182] Loaded profile config "functional-728898": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.32.2
I0407 12:32:27.195360 1180699 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
I0407 12:32:27.195458 1180699 main.go:141] libmachine: Launching plugin server for driver kvm2
I0407 12:32:27.213553 1180699 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44819
I0407 12:32:27.214323 1180699 main.go:141] libmachine: () Calling .GetVersion
I0407 12:32:27.215186 1180699 main.go:141] libmachine: Using API Version  1
I0407 12:32:27.215222 1180699 main.go:141] libmachine: () Calling .SetConfigRaw
I0407 12:32:27.215724 1180699 main.go:141] libmachine: () Calling .GetMachineName
I0407 12:32:27.216016 1180699 main.go:141] libmachine: (functional-728898) Calling .GetState
I0407 12:32:27.218546 1180699 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
I0407 12:32:27.218609 1180699 main.go:141] libmachine: Launching plugin server for driver kvm2
I0407 12:32:27.235696 1180699 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33673
I0407 12:32:27.236322 1180699 main.go:141] libmachine: () Calling .GetVersion
I0407 12:32:27.236962 1180699 main.go:141] libmachine: Using API Version  1
I0407 12:32:27.236998 1180699 main.go:141] libmachine: () Calling .SetConfigRaw
I0407 12:32:27.237478 1180699 main.go:141] libmachine: () Calling .GetMachineName
I0407 12:32:27.237742 1180699 main.go:141] libmachine: (functional-728898) Calling .DriverName
I0407 12:32:27.238072 1180699 ssh_runner.go:195] Run: systemctl --version
I0407 12:32:27.238110 1180699 main.go:141] libmachine: (functional-728898) Calling .GetSSHHostname
I0407 12:32:27.244019 1180699 main.go:141] libmachine: (functional-728898) DBG | domain functional-728898 has defined MAC address 52:54:00:97:5b:47 in network mk-functional-728898
I0407 12:32:27.244920 1180699 main.go:141] libmachine: (functional-728898) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:97:5b:47", ip: ""} in network mk-functional-728898: {Iface:virbr1 ExpiryTime:2025-04-07 13:23:07 +0000 UTC Type:0 Mac:52:54:00:97:5b:47 Iaid: IPaddr:192.168.39.151 Prefix:24 Hostname:functional-728898 Clientid:01:52:54:00:97:5b:47}
I0407 12:32:27.244962 1180699 main.go:141] libmachine: (functional-728898) DBG | domain functional-728898 has defined IP address 192.168.39.151 and MAC address 52:54:00:97:5b:47 in network mk-functional-728898
I0407 12:32:27.245382 1180699 main.go:141] libmachine: (functional-728898) Calling .GetSSHPort
I0407 12:32:27.245875 1180699 main.go:141] libmachine: (functional-728898) Calling .GetSSHKeyPath
I0407 12:32:27.246322 1180699 main.go:141] libmachine: (functional-728898) Calling .GetSSHUsername
I0407 12:32:27.246657 1180699 sshutil.go:53] new ssh client: &{IP:192.168.39.151 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20602-1162386/.minikube/machines/functional-728898/id_rsa Username:docker}
I0407 12:32:27.333442 1180699 ssh_runner.go:195] Run: sudo crictl images --output json
I0407 12:32:27.377306 1180699 main.go:141] libmachine: Making call to close driver server
I0407 12:32:27.377327 1180699 main.go:141] libmachine: (functional-728898) Calling .Close
I0407 12:32:27.377699 1180699 main.go:141] libmachine: Successfully made call to close driver server
I0407 12:32:27.377732 1180699 main.go:141] libmachine: (functional-728898) DBG | Closing plugin on server side
I0407 12:32:27.377747 1180699 main.go:141] libmachine: Making call to close connection to plugin binary
I0407 12:32:27.377837 1180699 main.go:141] libmachine: Making call to close driver server
I0407 12:32:27.377864 1180699 main.go:141] libmachine: (functional-728898) Calling .Close
I0407 12:32:27.378215 1180699 main.go:141] libmachine: (functional-728898) DBG | Closing plugin on server side
I0407 12:32:27.378389 1180699 main.go:141] libmachine: Successfully made call to close driver server
I0407 12:32:27.378425 1180699 main.go:141] libmachine: Making call to close connection to plugin binary
--- PASS: TestFunctional/parallel/ImageCommands/ImageListShort (0.25s)
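minikube image ls supports the short, table, json and yaml formats exercised in this group; as the stderr above shows, it resolves the list by running sudo crictl images --output json inside the node. For example:
	minikube -p functional-728898 image ls --format short   # one image reference per line
	minikube -p functional-728898 image ls --format table   # the table shown in the next test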

TestFunctional/parallel/ImageCommands/ImageListTable (0.23s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageListTable
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListTable
=== CONT  TestFunctional/parallel/ImageCommands/ImageListTable
functional_test.go:278: (dbg) Run:  out/minikube-linux-amd64 -p functional-728898 image ls --format table --alsologtostderr
functional_test.go:283: (dbg) Stdout: out/minikube-linux-amd64 -p functional-728898 image ls --format table --alsologtostderr:
|-----------------------------------------|--------------------|---------------|--------|
|                  Image                  |        Tag         |   Image ID    |  Size  |
|-----------------------------------------|--------------------|---------------|--------|
| registry.k8s.io/kube-proxy              | v1.32.2            | f1332858868e1 | 95.3MB |
| docker.io/kindest/kindnetd              | v20241212-9f82dd49 | d300845f67aeb | 95.7MB |
| gcr.io/k8s-minikube/busybox             | latest             | beae173ccac6a | 1.46MB |
| gcr.io/k8s-minikube/storage-provisioner | v5                 | 6e38f40d628db | 31.5MB |
| registry.k8s.io/coredns/coredns         | v1.11.3            | c69fa2e9cbf5f | 63.3MB |
| registry.k8s.io/etcd                    | 3.5.16-0           | a9e7e6b294baf | 151MB  |
| registry.k8s.io/kube-apiserver          | v1.32.2            | 85b7a174738ba | 98.1MB |
| registry.k8s.io/pause                   | 3.10               | 873ed75102791 | 742kB  |
| registry.k8s.io/pause                   | 3.3                | 0184c1613d929 | 686kB  |
| gcr.io/k8s-minikube/busybox             | 1.28.4-glibc       | 56cc512116c8f | 4.63MB |
| localhost/minikube-local-cache-test     | functional-728898  | ba7fef586dc97 | 3.33kB |
| localhost/my-image                      | functional-728898  | c9f3bc4f52c60 | 1.47MB |
| registry.k8s.io/kube-controller-manager | v1.32.2            | b6a454c5a800d | 90.8MB |
| registry.k8s.io/kube-scheduler          | v1.32.2            | d8e673e7c9983 | 70.7MB |
| registry.k8s.io/pause                   | 3.1                | da86e6ba6ca19 | 747kB  |
| registry.k8s.io/pause                   | latest             | 350b164e7ae1d | 247kB  |
| registry.k8s.io/echoserver              | 1.8                | 82e4c8a736a4f | 97.8MB |
| docker.io/library/nginx                 | alpine             | 1ff4bb4faebcf | 49.3MB |
| docker.io/library/nginx                 | latest             | 53a18edff8091 | 196MB  |
| localhost/kicbase/echo-server           | functional-728898  | 9056ab77afb8e | 4.94MB |
|-----------------------------------------|--------------------|---------------|--------|
functional_test.go:286: (dbg) Stderr: out/minikube-linux-amd64 -p functional-728898 image ls --format table --alsologtostderr:
I0407 12:32:31.971009 1180865 out.go:345] Setting OutFile to fd 1 ...
I0407 12:32:31.971174 1180865 out.go:392] TERM=,COLORTERM=, which probably does not support color
I0407 12:32:31.971185 1180865 out.go:358] Setting ErrFile to fd 2...
I0407 12:32:31.971190 1180865 out.go:392] TERM=,COLORTERM=, which probably does not support color
I0407 12:32:31.971541 1180865 root.go:338] Updating PATH: /home/jenkins/minikube-integration/20602-1162386/.minikube/bin
I0407 12:32:31.972382 1180865 config.go:182] Loaded profile config "functional-728898": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.32.2
I0407 12:32:31.972563 1180865 config.go:182] Loaded profile config "functional-728898": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.32.2
I0407 12:32:31.973108 1180865 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
I0407 12:32:31.973208 1180865 main.go:141] libmachine: Launching plugin server for driver kvm2
I0407 12:32:31.990669 1180865 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46097
I0407 12:32:31.991326 1180865 main.go:141] libmachine: () Calling .GetVersion
I0407 12:32:31.991999 1180865 main.go:141] libmachine: Using API Version  1
I0407 12:32:31.992031 1180865 main.go:141] libmachine: () Calling .SetConfigRaw
I0407 12:32:31.992488 1180865 main.go:141] libmachine: () Calling .GetMachineName
I0407 12:32:31.992746 1180865 main.go:141] libmachine: (functional-728898) Calling .GetState
I0407 12:32:31.995616 1180865 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
I0407 12:32:31.995692 1180865 main.go:141] libmachine: Launching plugin server for driver kvm2
I0407 12:32:32.013351 1180865 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37907
I0407 12:32:32.013971 1180865 main.go:141] libmachine: () Calling .GetVersion
I0407 12:32:32.014492 1180865 main.go:141] libmachine: Using API Version  1
I0407 12:32:32.014519 1180865 main.go:141] libmachine: () Calling .SetConfigRaw
I0407 12:32:32.015000 1180865 main.go:141] libmachine: () Calling .GetMachineName
I0407 12:32:32.015306 1180865 main.go:141] libmachine: (functional-728898) Calling .DriverName
I0407 12:32:32.015608 1180865 ssh_runner.go:195] Run: systemctl --version
I0407 12:32:32.015638 1180865 main.go:141] libmachine: (functional-728898) Calling .GetSSHHostname
I0407 12:32:32.019790 1180865 main.go:141] libmachine: (functional-728898) DBG | domain functional-728898 has defined MAC address 52:54:00:97:5b:47 in network mk-functional-728898
I0407 12:32:32.020441 1180865 main.go:141] libmachine: (functional-728898) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:97:5b:47", ip: ""} in network mk-functional-728898: {Iface:virbr1 ExpiryTime:2025-04-07 13:23:07 +0000 UTC Type:0 Mac:52:54:00:97:5b:47 Iaid: IPaddr:192.168.39.151 Prefix:24 Hostname:functional-728898 Clientid:01:52:54:00:97:5b:47}
I0407 12:32:32.020482 1180865 main.go:141] libmachine: (functional-728898) DBG | domain functional-728898 has defined IP address 192.168.39.151 and MAC address 52:54:00:97:5b:47 in network mk-functional-728898
I0407 12:32:32.020727 1180865 main.go:141] libmachine: (functional-728898) Calling .GetSSHPort
I0407 12:32:32.021068 1180865 main.go:141] libmachine: (functional-728898) Calling .GetSSHKeyPath
I0407 12:32:32.021370 1180865 main.go:141] libmachine: (functional-728898) Calling .GetSSHUsername
I0407 12:32:32.021610 1180865 sshutil.go:53] new ssh client: &{IP:192.168.39.151 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20602-1162386/.minikube/machines/functional-728898/id_rsa Username:docker}
I0407 12:32:32.105474 1180865 ssh_runner.go:195] Run: sudo crictl images --output json
I0407 12:32:32.145351 1180865 main.go:141] libmachine: Making call to close driver server
I0407 12:32:32.145378 1180865 main.go:141] libmachine: (functional-728898) Calling .Close
I0407 12:32:32.145778 1180865 main.go:141] libmachine: Successfully made call to close driver server
I0407 12:32:32.145811 1180865 main.go:141] libmachine: Making call to close connection to plugin binary
I0407 12:32:32.145818 1180865 main.go:141] libmachine: (functional-728898) DBG | Closing plugin on server side
I0407 12:32:32.145826 1180865 main.go:141] libmachine: Making call to close driver server
I0407 12:32:32.145836 1180865 main.go:141] libmachine: (functional-728898) Calling .Close
I0407 12:32:32.146119 1180865 main.go:141] libmachine: Successfully made call to close driver server
I0407 12:32:32.146148 1180865 main.go:141] libmachine: Making call to close connection to plugin binary
I0407 12:32:32.146168 1180865 main.go:141] libmachine: (functional-728898) DBG | Closing plugin on server side
--- PASS: TestFunctional/parallel/ImageCommands/ImageListTable (0.23s)

TestFunctional/parallel/ImageCommands/ImageListJson (0.23s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageListJson
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListJson
=== CONT  TestFunctional/parallel/ImageCommands/ImageListJson
functional_test.go:278: (dbg) Run:  out/minikube-linux-amd64 -p functional-728898 image ls --format json --alsologtostderr
functional_test.go:283: (dbg) Stdout: out/minikube-linux-amd64 -p functional-728898 image ls --format json --alsologtostderr:
[{"id":"0184c1613d92931126feb4c548e5da11015513b9e4c104e7305ee8b53b50a9da","repoDigests":["registry.k8s.io/pause@sha256:1000de19145c53d83aab989956fa8fca08dcbcc5b0208bdc193517905e6ccd04"],"repoTags":["registry.k8s.io/pause:3.3"],"size":"686139"},{"id":"b6a454c5a800d201daacead6ff195ec6049fe6dc086621b0670bca912efaf389","repoDigests":["registry.k8s.io/kube-controller-manager@sha256:01669d976f198e210414e4864454330f6cbd4e5fedf1570b0340d206442f2ae5","registry.k8s.io/kube-controller-manager@sha256:399aa50f4d1361c59dc458e634506d02de32613d03a9a614a21058741162ef90"],"repoTags":["registry.k8s.io/kube-controller-manager:v1.32.2"],"size":"90793286"},{"id":"f1332858868e1c6a905123b21e2e322ab45a5b99a3532e68ff49a87c2266ebc5","repoDigests":["registry.k8s.io/kube-proxy@sha256:83c025f0faa6799fab6645102a98138e39a9a7db2be3bc792c79d72659b1805d","registry.k8s.io/kube-proxy@sha256:ab90de2ec2cbade95df799a63d85e438f51817055ecee067b694fdd0f776e15d"],"repoTags":["registry.k8s.io/kube-proxy:v1.32.2"],"size":"95271321"},{"id":"873ed7
5102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136","repoDigests":["registry.k8s.io/pause@sha256:7c38f24774e3cbd906d2d33c38354ccf787635581c122965132c9bd309754d4a","registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a"],"repoTags":["registry.k8s.io/pause:3.10"],"size":"742080"},{"id":"d300845f67aebd4f27f549889087215f476cecdd6d9a715b49a4152857549c56","repoDigests":["docker.io/kindest/kindnetd@sha256:56ea59f77258052c4506076525318ffa66817500f68e94a50fdf7d600a280d26","docker.io/kindest/kindnetd@sha256:a3c74735c5fc7cab683a2f94dddec913052aacaa8d8b773c88d428e8dee3dd40"],"repoTags":["docker.io/kindest/kindnetd:v20241212-9f82dd49"],"size":"95714353"},{"id":"beae173ccac6ad749f76713cf4440fe3d21d1043fe616dfbe30775815d1d0f6a","repoDigests":["gcr.io/k8s-minikube/busybox@sha256:62ffc2ed7554e4c6d360bce40bbcf196573dd27c4ce080641a2c59867e732dee","gcr.io/k8s-minikube/busybox@sha256:ca5ae90100d50772da31f3b5016209e25ad61972404e2ccd83d44f10dee7e79b"],"repoTags":["gcr.io/k8s-minikube/
busybox:latest"],"size":"1462480"},{"id":"6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562","repoDigests":["gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944","gcr.io/k8s-minikube/storage-provisioner@sha256:c4c05d6ad6c0f24d87b39e596d4dddf64bec3e0d84f5b36e4511d4ebf583f38f"],"repoTags":["gcr.io/k8s-minikube/storage-provisioner:v5"],"size":"31470524"},{"id":"82e4c8a736a4fcf22b5ef9f6a4ff6207064c7187d7694bf97bd561605a538410","repoDigests":["registry.k8s.io/echoserver@sha256:cb3386f863f6a4b05f33c191361723f9d5927ac287463b1bea633bf859475969"],"repoTags":["registry.k8s.io/echoserver:1.8"],"size":"97846543"},{"id":"da86e6ba6ca197bf6bc5e9d900febd906b133eaa4750e6bed647b0fbe50ed43e","repoDigests":["registry.k8s.io/pause@sha256:84805ddcaaae94434d8eacb7e843f549ec1da0cd277787b97ad9d9ac2cea929e"],"repoTags":["registry.k8s.io/pause:3.1"],"size":"746911"},{"id":"350b164e7ae1dcddeffadd65c76226c9b6dc5553f5179153fb0e36b78f2a5e06","repoDigests":["registry
.k8s.io/pause@sha256:5bcb06ed43da4a16c6e6e33898eb0506e940bd66822659ecf0a898bbb0da7cb9"],"repoTags":["registry.k8s.io/pause:latest"],"size":"247077"},{"id":"7a5c052acc1853ff175934751ed9a4123e129a6a2c526c28d5a5920975247584","repoDigests":["docker.io/library/0634882eca7efff40362113ca6f15d60ceea85761c253ed5176aa1e88819a570-tmp@sha256:fd8dfd2fc689b82d6e02b8d2b6e1f0d316910db58c0993715fa5ab429731f48b"],"repoTags":[],"size":"1466016"},{"id":"1ff4bb4faebcfb1f7e01144fa9904a570ab9bab88694457855feb6c6bba3fa07","repoDigests":["docker.io/library/nginx@sha256:4ff102c5d78d254a6f0da062b3cf39eaf07f01eec0927fd21e219d0af8bc0591","docker.io/library/nginx@sha256:a71e0884a7f1192ecf5decf062b67d46b54ad63f0cc1b8aa7e705f739a97c2fc"],"repoTags":["docker.io/library/nginx:alpine"],"size":"49323988"},{"id":"9056ab77afb8e18e04303f11000a9d31b3f16b74c59475b899ae1b342d328d30","repoDigests":["localhost/kicbase/echo-server@sha256:d3d0b737c6413dcf7b9393d61285525048f2d10a0aae68296150078d379c30cf"],"repoTags":["localhost/kicbase/echo-server:functio
nal-728898"],"size":"4943877"},{"id":"c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6","repoDigests":["registry.k8s.io/coredns/coredns@sha256:9caabbf6238b189a65d0d6e6ac138de60d6a1c419e5a341fbbb7c78382559c6e","registry.k8s.io/coredns/coredns@sha256:f0b8c589314ed010a0c326e987a52b50801f0145ac9b75423af1b5c66dbd6d50"],"repoTags":["registry.k8s.io/coredns/coredns:v1.11.3"],"size":"63273227"},{"id":"d8e673e7c9983f1f53569a9d2ba786c8abb42e3f744f77dc97a595f3caf9435d","repoDigests":["registry.k8s.io/kube-scheduler@sha256:45710d74cfd5aa10a001d0cf81747b77c28617444ffee0503d12f1dcd7450f76","registry.k8s.io/kube-scheduler@sha256:c98f93221ffa10bfb46b85966915759dbcaf957098364763242e814fee84363b"],"repoTags":["registry.k8s.io/kube-scheduler:v1.32.2"],"size":"70653254"},{"id":"53a18edff8091d5faff1e42b4d885bc5f0f897873b0b8f0ace236cd5930819b0","repoDigests":["docker.io/library/nginx@sha256:124b44bfc9ccd1f3cedf4b592d4d1e8bddb78b51ec2ed5056c52d3692baebc19","docker.io/library/nginx@sha256:54809b2f36d0ff38e8e5362b0239
779e4b75c2f19ad70ef047ed050f01506bb4"],"repoTags":["docker.io/library/nginx:latest"],"size":"196159380"},{"id":"56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c","repoDigests":["gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e","gcr.io/k8s-minikube/busybox@sha256:a85c92d5aa82aa6db0f92e5af591c2670a60a762da6bdfec52d960d55295f998"],"repoTags":["gcr.io/k8s-minikube/busybox:1.28.4-glibc"],"size":"4631262"},{"id":"ba7fef586dc9792a093e446b45ade355c97a832eb7db22e90c22b9d1a2b767d0","repoDigests":["localhost/minikube-local-cache-test@sha256:80c2fbb27d233460b82fef4e246f1a2bc8bb10f1ee98960a3562e900467210d1"],"repoTags":["localhost/minikube-local-cache-test:functional-728898"],"size":"3328"},{"id":"c9f3bc4f52c60bedc9934b4808932c48732829a0074e252b0bd5760b2b60b3df","repoDigests":["localhost/my-image@sha256:ca61fdd2a8788c00e1e6eb5b49bd23b0345c3ee014c481155c2e971a2948ea97"],"repoTags":["localhost/my-image:functional-728898"],"size":"1468599"},{"id":"a9e7e6b294
baf1695fccb862d956c5d3ad8510e1e4ca1535f35dc09f247abbfc","repoDigests":["registry.k8s.io/etcd@sha256:1d988b04a9476119cdbc2025ba58f6eec19554caf36edb43c357ff412d07e990","registry.k8s.io/etcd@sha256:c6a9d11cc5c04b114ccdef39a9265eeef818e3d02f5359be035ae784097fdec5"],"repoTags":["registry.k8s.io/etcd:3.5.16-0"],"size":"151021823"},{"id":"85b7a174738baecbc53029b7913cd430a2060e0cbdb5f56c7957d32ff7f241ef","repoDigests":["registry.k8s.io/kube-apiserver@sha256:48e677803a23233a10a796f3d7edc73223e3fbaceb6113665c1015464a743e9d","registry.k8s.io/kube-apiserver@sha256:c47449f3e751588ea0cb74e325e0f83db335a415f4f4c7fb147375dd6c84757f"],"repoTags":["registry.k8s.io/kube-apiserver:v1.32.2"],"size":"98055648"}]
functional_test.go:286: (dbg) Stderr: out/minikube-linux-amd64 -p functional-728898 image ls --format json --alsologtostderr:
I0407 12:32:31.744694 1180841 out.go:345] Setting OutFile to fd 1 ...
I0407 12:32:31.744834 1180841 out.go:392] TERM=,COLORTERM=, which probably does not support color
I0407 12:32:31.744843 1180841 out.go:358] Setting ErrFile to fd 2...
I0407 12:32:31.744850 1180841 out.go:392] TERM=,COLORTERM=, which probably does not support color
I0407 12:32:31.745063 1180841 root.go:338] Updating PATH: /home/jenkins/minikube-integration/20602-1162386/.minikube/bin
I0407 12:32:31.745770 1180841 config.go:182] Loaded profile config "functional-728898": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.32.2
I0407 12:32:31.745907 1180841 config.go:182] Loaded profile config "functional-728898": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.32.2
I0407 12:32:31.746315 1180841 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
I0407 12:32:31.746393 1180841 main.go:141] libmachine: Launching plugin server for driver kvm2
I0407 12:32:31.763367 1180841 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42315
I0407 12:32:31.764054 1180841 main.go:141] libmachine: () Calling .GetVersion
I0407 12:32:31.764724 1180841 main.go:141] libmachine: Using API Version  1
I0407 12:32:31.764759 1180841 main.go:141] libmachine: () Calling .SetConfigRaw
I0407 12:32:31.765236 1180841 main.go:141] libmachine: () Calling .GetMachineName
I0407 12:32:31.765485 1180841 main.go:141] libmachine: (functional-728898) Calling .GetState
I0407 12:32:31.768070 1180841 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
I0407 12:32:31.768133 1180841 main.go:141] libmachine: Launching plugin server for driver kvm2
I0407 12:32:31.785403 1180841 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41931
I0407 12:32:31.785980 1180841 main.go:141] libmachine: () Calling .GetVersion
I0407 12:32:31.786451 1180841 main.go:141] libmachine: Using API Version  1
I0407 12:32:31.786475 1180841 main.go:141] libmachine: () Calling .SetConfigRaw
I0407 12:32:31.786882 1180841 main.go:141] libmachine: () Calling .GetMachineName
I0407 12:32:31.787147 1180841 main.go:141] libmachine: (functional-728898) Calling .DriverName
I0407 12:32:31.787399 1180841 ssh_runner.go:195] Run: systemctl --version
I0407 12:32:31.787434 1180841 main.go:141] libmachine: (functional-728898) Calling .GetSSHHostname
I0407 12:32:31.790629 1180841 main.go:141] libmachine: (functional-728898) DBG | domain functional-728898 has defined MAC address 52:54:00:97:5b:47 in network mk-functional-728898
I0407 12:32:31.791020 1180841 main.go:141] libmachine: (functional-728898) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:97:5b:47", ip: ""} in network mk-functional-728898: {Iface:virbr1 ExpiryTime:2025-04-07 13:23:07 +0000 UTC Type:0 Mac:52:54:00:97:5b:47 Iaid: IPaddr:192.168.39.151 Prefix:24 Hostname:functional-728898 Clientid:01:52:54:00:97:5b:47}
I0407 12:32:31.791071 1180841 main.go:141] libmachine: (functional-728898) DBG | domain functional-728898 has defined IP address 192.168.39.151 and MAC address 52:54:00:97:5b:47 in network mk-functional-728898
I0407 12:32:31.791225 1180841 main.go:141] libmachine: (functional-728898) Calling .GetSSHPort
I0407 12:32:31.791454 1180841 main.go:141] libmachine: (functional-728898) Calling .GetSSHKeyPath
I0407 12:32:31.791696 1180841 main.go:141] libmachine: (functional-728898) Calling .GetSSHUsername
I0407 12:32:31.791927 1180841 sshutil.go:53] new ssh client: &{IP:192.168.39.151 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20602-1162386/.minikube/machines/functional-728898/id_rsa Username:docker}
I0407 12:32:31.873068 1180841 ssh_runner.go:195] Run: sudo crictl images --output json
I0407 12:32:31.912236 1180841 main.go:141] libmachine: Making call to close driver server
I0407 12:32:31.912256 1180841 main.go:141] libmachine: (functional-728898) Calling .Close
I0407 12:32:31.912663 1180841 main.go:141] libmachine: Successfully made call to close driver server
I0407 12:32:31.912690 1180841 main.go:141] libmachine: Making call to close connection to plugin binary
I0407 12:32:31.912700 1180841 main.go:141] libmachine: Making call to close driver server
I0407 12:32:31.912708 1180841 main.go:141] libmachine: (functional-728898) Calling .Close
I0407 12:32:31.912711 1180841 main.go:141] libmachine: (functional-728898) DBG | Closing plugin on server side
I0407 12:32:31.913011 1180841 main.go:141] libmachine: (functional-728898) DBG | Closing plugin on server side
I0407 12:32:31.913049 1180841 main.go:141] libmachine: Successfully made call to close driver server
I0407 12:32:31.913071 1180841 main.go:141] libmachine: Making call to close connection to plugin binary
--- PASS: TestFunctional/parallel/ImageCommands/ImageListJson (0.23s)

TestFunctional/parallel/ImageCommands/ImageListYaml (0.24s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageListYaml
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListYaml
=== CONT  TestFunctional/parallel/ImageCommands/ImageListYaml
functional_test.go:278: (dbg) Run:  out/minikube-linux-amd64 -p functional-728898 image ls --format yaml --alsologtostderr
functional_test.go:283: (dbg) Stdout: out/minikube-linux-amd64 -p functional-728898 image ls --format yaml --alsologtostderr:
- id: 56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c
repoDigests:
- gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e
- gcr.io/k8s-minikube/busybox@sha256:a85c92d5aa82aa6db0f92e5af591c2670a60a762da6bdfec52d960d55295f998
repoTags:
- gcr.io/k8s-minikube/busybox:1.28.4-glibc
size: "4631262"
- id: 85b7a174738baecbc53029b7913cd430a2060e0cbdb5f56c7957d32ff7f241ef
repoDigests:
- registry.k8s.io/kube-apiserver@sha256:48e677803a23233a10a796f3d7edc73223e3fbaceb6113665c1015464a743e9d
- registry.k8s.io/kube-apiserver@sha256:c47449f3e751588ea0cb74e325e0f83db335a415f4f4c7fb147375dd6c84757f
repoTags:
- registry.k8s.io/kube-apiserver:v1.32.2
size: "98055648"
- id: b6a454c5a800d201daacead6ff195ec6049fe6dc086621b0670bca912efaf389
repoDigests:
- registry.k8s.io/kube-controller-manager@sha256:01669d976f198e210414e4864454330f6cbd4e5fedf1570b0340d206442f2ae5
- registry.k8s.io/kube-controller-manager@sha256:399aa50f4d1361c59dc458e634506d02de32613d03a9a614a21058741162ef90
repoTags:
- registry.k8s.io/kube-controller-manager:v1.32.2
size: "90793286"
- id: da86e6ba6ca197bf6bc5e9d900febd906b133eaa4750e6bed647b0fbe50ed43e
repoDigests:
- registry.k8s.io/pause@sha256:84805ddcaaae94434d8eacb7e843f549ec1da0cd277787b97ad9d9ac2cea929e
repoTags:
- registry.k8s.io/pause:3.1
size: "746911"
- id: 873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136
repoDigests:
- registry.k8s.io/pause@sha256:7c38f24774e3cbd906d2d33c38354ccf787635581c122965132c9bd309754d4a
- registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a
repoTags:
- registry.k8s.io/pause:3.10
size: "742080"
- id: d300845f67aebd4f27f549889087215f476cecdd6d9a715b49a4152857549c56
repoDigests:
- docker.io/kindest/kindnetd@sha256:56ea59f77258052c4506076525318ffa66817500f68e94a50fdf7d600a280d26
- docker.io/kindest/kindnetd@sha256:a3c74735c5fc7cab683a2f94dddec913052aacaa8d8b773c88d428e8dee3dd40
repoTags:
- docker.io/kindest/kindnetd:v20241212-9f82dd49
size: "95714353"
- id: 1ff4bb4faebcfb1f7e01144fa9904a570ab9bab88694457855feb6c6bba3fa07
repoDigests:
- docker.io/library/nginx@sha256:4ff102c5d78d254a6f0da062b3cf39eaf07f01eec0927fd21e219d0af8bc0591
- docker.io/library/nginx@sha256:a71e0884a7f1192ecf5decf062b67d46b54ad63f0cc1b8aa7e705f739a97c2fc
repoTags:
- docker.io/library/nginx:alpine
size: "49323988"
- id: 6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562
repoDigests:
- gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944
- gcr.io/k8s-minikube/storage-provisioner@sha256:c4c05d6ad6c0f24d87b39e596d4dddf64bec3e0d84f5b36e4511d4ebf583f38f
repoTags:
- gcr.io/k8s-minikube/storage-provisioner:v5
size: "31470524"
- id: ba7fef586dc9792a093e446b45ade355c97a832eb7db22e90c22b9d1a2b767d0
repoDigests:
- localhost/minikube-local-cache-test@sha256:80c2fbb27d233460b82fef4e246f1a2bc8bb10f1ee98960a3562e900467210d1
repoTags:
- localhost/minikube-local-cache-test:functional-728898
size: "3328"
- id: c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6
repoDigests:
- registry.k8s.io/coredns/coredns@sha256:9caabbf6238b189a65d0d6e6ac138de60d6a1c419e5a341fbbb7c78382559c6e
- registry.k8s.io/coredns/coredns@sha256:f0b8c589314ed010a0c326e987a52b50801f0145ac9b75423af1b5c66dbd6d50
repoTags:
- registry.k8s.io/coredns/coredns:v1.11.3
size: "63273227"
- id: 82e4c8a736a4fcf22b5ef9f6a4ff6207064c7187d7694bf97bd561605a538410
repoDigests:
- registry.k8s.io/echoserver@sha256:cb3386f863f6a4b05f33c191361723f9d5927ac287463b1bea633bf859475969
repoTags:
- registry.k8s.io/echoserver:1.8
size: "97846543"
- id: 350b164e7ae1dcddeffadd65c76226c9b6dc5553f5179153fb0e36b78f2a5e06
repoDigests:
- registry.k8s.io/pause@sha256:5bcb06ed43da4a16c6e6e33898eb0506e940bd66822659ecf0a898bbb0da7cb9
repoTags:
- registry.k8s.io/pause:latest
size: "247077"
- id: 53a18edff8091d5faff1e42b4d885bc5f0f897873b0b8f0ace236cd5930819b0
repoDigests:
- docker.io/library/nginx@sha256:124b44bfc9ccd1f3cedf4b592d4d1e8bddb78b51ec2ed5056c52d3692baebc19
- docker.io/library/nginx@sha256:54809b2f36d0ff38e8e5362b0239779e4b75c2f19ad70ef047ed050f01506bb4
repoTags:
- docker.io/library/nginx:latest
size: "196159380"
- id: a9e7e6b294baf1695fccb862d956c5d3ad8510e1e4ca1535f35dc09f247abbfc
repoDigests:
- registry.k8s.io/etcd@sha256:1d988b04a9476119cdbc2025ba58f6eec19554caf36edb43c357ff412d07e990
- registry.k8s.io/etcd@sha256:c6a9d11cc5c04b114ccdef39a9265eeef818e3d02f5359be035ae784097fdec5
repoTags:
- registry.k8s.io/etcd:3.5.16-0
size: "151021823"
- id: f1332858868e1c6a905123b21e2e322ab45a5b99a3532e68ff49a87c2266ebc5
repoDigests:
- registry.k8s.io/kube-proxy@sha256:83c025f0faa6799fab6645102a98138e39a9a7db2be3bc792c79d72659b1805d
- registry.k8s.io/kube-proxy@sha256:ab90de2ec2cbade95df799a63d85e438f51817055ecee067b694fdd0f776e15d
repoTags:
- registry.k8s.io/kube-proxy:v1.32.2
size: "95271321"
- id: d8e673e7c9983f1f53569a9d2ba786c8abb42e3f744f77dc97a595f3caf9435d
repoDigests:
- registry.k8s.io/kube-scheduler@sha256:45710d74cfd5aa10a001d0cf81747b77c28617444ffee0503d12f1dcd7450f76
- registry.k8s.io/kube-scheduler@sha256:c98f93221ffa10bfb46b85966915759dbcaf957098364763242e814fee84363b
repoTags:
- registry.k8s.io/kube-scheduler:v1.32.2
size: "70653254"
- id: 0184c1613d92931126feb4c548e5da11015513b9e4c104e7305ee8b53b50a9da
repoDigests:
- registry.k8s.io/pause@sha256:1000de19145c53d83aab989956fa8fca08dcbcc5b0208bdc193517905e6ccd04
repoTags:
- registry.k8s.io/pause:3.3
size: "686139"
- id: 9056ab77afb8e18e04303f11000a9d31b3f16b74c59475b899ae1b342d328d30
repoDigests:
- localhost/kicbase/echo-server@sha256:d3d0b737c6413dcf7b9393d61285525048f2d10a0aae68296150078d379c30cf
repoTags:
- localhost/kicbase/echo-server:functional-728898
size: "4943877"

functional_test.go:286: (dbg) Stderr: out/minikube-linux-amd64 -p functional-728898 image ls --format yaml --alsologtostderr:
I0407 12:32:27.443091 1180723 out.go:345] Setting OutFile to fd 1 ...
I0407 12:32:27.443421 1180723 out.go:392] TERM=,COLORTERM=, which probably does not support color
I0407 12:32:27.443435 1180723 out.go:358] Setting ErrFile to fd 2...
I0407 12:32:27.443441 1180723 out.go:392] TERM=,COLORTERM=, which probably does not support color
I0407 12:32:27.443703 1180723 root.go:338] Updating PATH: /home/jenkins/minikube-integration/20602-1162386/.minikube/bin
I0407 12:32:27.444438 1180723 config.go:182] Loaded profile config "functional-728898": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.32.2
I0407 12:32:27.444566 1180723 config.go:182] Loaded profile config "functional-728898": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.32.2
I0407 12:32:27.444983 1180723 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
I0407 12:32:27.445053 1180723 main.go:141] libmachine: Launching plugin server for driver kvm2
I0407 12:32:27.462619 1180723 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46115
I0407 12:32:27.463239 1180723 main.go:141] libmachine: () Calling .GetVersion
I0407 12:32:27.463827 1180723 main.go:141] libmachine: Using API Version  1
I0407 12:32:27.463855 1180723 main.go:141] libmachine: () Calling .SetConfigRaw
I0407 12:32:27.464312 1180723 main.go:141] libmachine: () Calling .GetMachineName
I0407 12:32:27.464581 1180723 main.go:141] libmachine: (functional-728898) Calling .GetState
I0407 12:32:27.467003 1180723 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
I0407 12:32:27.467058 1180723 main.go:141] libmachine: Launching plugin server for driver kvm2
I0407 12:32:27.484117 1180723 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36719
I0407 12:32:27.484680 1180723 main.go:141] libmachine: () Calling .GetVersion
I0407 12:32:27.485171 1180723 main.go:141] libmachine: Using API Version  1
I0407 12:32:27.485198 1180723 main.go:141] libmachine: () Calling .SetConfigRaw
I0407 12:32:27.485615 1180723 main.go:141] libmachine: () Calling .GetMachineName
I0407 12:32:27.485898 1180723 main.go:141] libmachine: (functional-728898) Calling .DriverName
I0407 12:32:27.486236 1180723 ssh_runner.go:195] Run: systemctl --version
I0407 12:32:27.486279 1180723 main.go:141] libmachine: (functional-728898) Calling .GetSSHHostname
I0407 12:32:27.489900 1180723 main.go:141] libmachine: (functional-728898) DBG | domain functional-728898 has defined MAC address 52:54:00:97:5b:47 in network mk-functional-728898
I0407 12:32:27.490357 1180723 main.go:141] libmachine: (functional-728898) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:97:5b:47", ip: ""} in network mk-functional-728898: {Iface:virbr1 ExpiryTime:2025-04-07 13:23:07 +0000 UTC Type:0 Mac:52:54:00:97:5b:47 Iaid: IPaddr:192.168.39.151 Prefix:24 Hostname:functional-728898 Clientid:01:52:54:00:97:5b:47}
I0407 12:32:27.490396 1180723 main.go:141] libmachine: (functional-728898) DBG | domain functional-728898 has defined IP address 192.168.39.151 and MAC address 52:54:00:97:5b:47 in network mk-functional-728898
I0407 12:32:27.490550 1180723 main.go:141] libmachine: (functional-728898) Calling .GetSSHPort
I0407 12:32:27.490800 1180723 main.go:141] libmachine: (functional-728898) Calling .GetSSHKeyPath
I0407 12:32:27.490995 1180723 main.go:141] libmachine: (functional-728898) Calling .GetSSHUsername
I0407 12:32:27.491143 1180723 sshutil.go:53] new ssh client: &{IP:192.168.39.151 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20602-1162386/.minikube/machines/functional-728898/id_rsa Username:docker}
I0407 12:32:27.572788 1180723 ssh_runner.go:195] Run: sudo crictl images --output json
I0407 12:32:27.619675 1180723 main.go:141] libmachine: Making call to close driver server
I0407 12:32:27.619689 1180723 main.go:141] libmachine: (functional-728898) Calling .Close
I0407 12:32:27.620039 1180723 main.go:141] libmachine: Successfully made call to close driver server
I0407 12:32:27.620062 1180723 main.go:141] libmachine: Making call to close connection to plugin binary
I0407 12:32:27.620072 1180723 main.go:141] libmachine: Making call to close driver server
I0407 12:32:27.620080 1180723 main.go:141] libmachine: (functional-728898) Calling .Close
I0407 12:32:27.620337 1180723 main.go:141] libmachine: Successfully made call to close driver server
I0407 12:32:27.620356 1180723 main.go:141] libmachine: Making call to close connection to plugin binary
I0407 12:32:27.620375 1180723 main.go:141] libmachine: (functional-728898) DBG | Closing plugin on server side
--- PASS: TestFunctional/parallel/ImageCommands/ImageListYaml (0.24s)

TestFunctional/parallel/ImageCommands/ImageBuild (4.07s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageBuild
=== PAUSE TestFunctional/parallel/ImageCommands/ImageBuild
=== CONT  TestFunctional/parallel/ImageCommands/ImageBuild
functional_test.go:325: (dbg) Run:  out/minikube-linux-amd64 -p functional-728898 ssh pgrep buildkitd
functional_test.go:325: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-728898 ssh pgrep buildkitd: exit status 1 (212.804831ms)

** stderr ** 
	ssh: Process exited with status 1

** /stderr **
functional_test.go:332: (dbg) Run:  out/minikube-linux-amd64 -p functional-728898 image build -t localhost/my-image:functional-728898 testdata/build --alsologtostderr
functional_test.go:332: (dbg) Done: out/minikube-linux-amd64 -p functional-728898 image build -t localhost/my-image:functional-728898 testdata/build --alsologtostderr: (3.605140281s)
functional_test.go:337: (dbg) Stdout: out/minikube-linux-amd64 -p functional-728898 image build -t localhost/my-image:functional-728898 testdata/build --alsologtostderr:
STEP 1/3: FROM gcr.io/k8s-minikube/busybox
STEP 2/3: RUN true
--> 7a5c052acc1
STEP 3/3: ADD content.txt /
COMMIT localhost/my-image:functional-728898
--> c9f3bc4f52c
Successfully tagged localhost/my-image:functional-728898
c9f3bc4f52c60bedc9934b4808932c48732829a0074e252b0bd5760b2b60b3df
functional_test.go:340: (dbg) Stderr: out/minikube-linux-amd64 -p functional-728898 image build -t localhost/my-image:functional-728898 testdata/build --alsologtostderr:
I0407 12:32:27.895519 1180777 out.go:345] Setting OutFile to fd 1 ...
I0407 12:32:27.895670 1180777 out.go:392] TERM=,COLORTERM=, which probably does not support color
I0407 12:32:27.895684 1180777 out.go:358] Setting ErrFile to fd 2...
I0407 12:32:27.895690 1180777 out.go:392] TERM=,COLORTERM=, which probably does not support color
I0407 12:32:27.895902 1180777 root.go:338] Updating PATH: /home/jenkins/minikube-integration/20602-1162386/.minikube/bin
I0407 12:32:27.896541 1180777 config.go:182] Loaded profile config "functional-728898": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.32.2
I0407 12:32:27.897232 1180777 config.go:182] Loaded profile config "functional-728898": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.32.2
I0407 12:32:27.897602 1180777 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
I0407 12:32:27.897659 1180777 main.go:141] libmachine: Launching plugin server for driver kvm2
I0407 12:32:27.915056 1180777 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36851
I0407 12:32:27.915625 1180777 main.go:141] libmachine: () Calling .GetVersion
I0407 12:32:27.916271 1180777 main.go:141] libmachine: Using API Version  1
I0407 12:32:27.916318 1180777 main.go:141] libmachine: () Calling .SetConfigRaw
I0407 12:32:27.916777 1180777 main.go:141] libmachine: () Calling .GetMachineName
I0407 12:32:27.917005 1180777 main.go:141] libmachine: (functional-728898) Calling .GetState
I0407 12:32:27.919473 1180777 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
I0407 12:32:27.919525 1180777 main.go:141] libmachine: Launching plugin server for driver kvm2
I0407 12:32:27.936528 1180777 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35531
I0407 12:32:27.937105 1180777 main.go:141] libmachine: () Calling .GetVersion
I0407 12:32:27.937826 1180777 main.go:141] libmachine: Using API Version  1
I0407 12:32:27.937880 1180777 main.go:141] libmachine: () Calling .SetConfigRaw
I0407 12:32:27.938340 1180777 main.go:141] libmachine: () Calling .GetMachineName
I0407 12:32:27.938606 1180777 main.go:141] libmachine: (functional-728898) Calling .DriverName
I0407 12:32:27.938886 1180777 ssh_runner.go:195] Run: systemctl --version
I0407 12:32:27.938920 1180777 main.go:141] libmachine: (functional-728898) Calling .GetSSHHostname
I0407 12:32:27.943017 1180777 main.go:141] libmachine: (functional-728898) DBG | domain functional-728898 has defined MAC address 52:54:00:97:5b:47 in network mk-functional-728898
I0407 12:32:27.943640 1180777 main.go:141] libmachine: (functional-728898) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:97:5b:47", ip: ""} in network mk-functional-728898: {Iface:virbr1 ExpiryTime:2025-04-07 13:23:07 +0000 UTC Type:0 Mac:52:54:00:97:5b:47 Iaid: IPaddr:192.168.39.151 Prefix:24 Hostname:functional-728898 Clientid:01:52:54:00:97:5b:47}
I0407 12:32:27.943684 1180777 main.go:141] libmachine: (functional-728898) DBG | domain functional-728898 has defined IP address 192.168.39.151 and MAC address 52:54:00:97:5b:47 in network mk-functional-728898
I0407 12:32:27.943836 1180777 main.go:141] libmachine: (functional-728898) Calling .GetSSHPort
I0407 12:32:27.944107 1180777 main.go:141] libmachine: (functional-728898) Calling .GetSSHKeyPath
I0407 12:32:27.944330 1180777 main.go:141] libmachine: (functional-728898) Calling .GetSSHUsername
I0407 12:32:27.944565 1180777 sshutil.go:53] new ssh client: &{IP:192.168.39.151 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20602-1162386/.minikube/machines/functional-728898/id_rsa Username:docker}
I0407 12:32:28.025894 1180777 build_images.go:161] Building image from path: /tmp/build.1973788362.tar
I0407 12:32:28.026080 1180777 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/build
I0407 12:32:28.039352 1180777 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/build/build.1973788362.tar
I0407 12:32:28.045443 1180777 ssh_runner.go:352] existence check for /var/lib/minikube/build/build.1973788362.tar: stat -c "%s %y" /var/lib/minikube/build/build.1973788362.tar: Process exited with status 1
stdout:

stderr:
stat: cannot statx '/var/lib/minikube/build/build.1973788362.tar': No such file or directory
I0407 12:32:28.045626 1180777 ssh_runner.go:362] scp /tmp/build.1973788362.tar --> /var/lib/minikube/build/build.1973788362.tar (3072 bytes)
I0407 12:32:28.082630 1180777 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/build/build.1973788362
I0407 12:32:28.099702 1180777 ssh_runner.go:195] Run: sudo tar -C /var/lib/minikube/build/build.1973788362 -xf /var/lib/minikube/build/build.1973788362.tar
I0407 12:32:28.114675 1180777 crio.go:315] Building image: /var/lib/minikube/build/build.1973788362
I0407 12:32:28.114769 1180777 ssh_runner.go:195] Run: sudo podman build -t localhost/my-image:functional-728898 /var/lib/minikube/build/build.1973788362 --cgroup-manager=cgroupfs
Trying to pull gcr.io/k8s-minikube/busybox:latest...
Getting image source signatures
Copying blob sha256:5cc84ad355aaa64f46ea9c7bbcc319a9d808ab15088a27209c9e70ef86e5a2aa
Copying blob sha256:5cc84ad355aaa64f46ea9c7bbcc319a9d808ab15088a27209c9e70ef86e5a2aa
Copying config sha256:beae173ccac6ad749f76713cf4440fe3d21d1043fe616dfbe30775815d1d0f6a
Writing manifest to image destination
Storing signatures
I0407 12:32:31.413179 1180777 ssh_runner.go:235] Completed: sudo podman build -t localhost/my-image:functional-728898 /var/lib/minikube/build/build.1973788362 --cgroup-manager=cgroupfs: (3.298365365s)
I0407 12:32:31.413284 1180777 ssh_runner.go:195] Run: sudo rm -rf /var/lib/minikube/build/build.1973788362
I0407 12:32:31.426180 1180777 ssh_runner.go:195] Run: sudo rm -f /var/lib/minikube/build/build.1973788362.tar
I0407 12:32:31.437919 1180777 build_images.go:217] Built localhost/my-image:functional-728898 from /tmp/build.1973788362.tar
I0407 12:32:31.438008 1180777 build_images.go:133] succeeded building to: functional-728898
I0407 12:32:31.438016 1180777 build_images.go:134] failed building to: 
I0407 12:32:31.438066 1180777 main.go:141] libmachine: Making call to close driver server
I0407 12:32:31.438083 1180777 main.go:141] libmachine: (functional-728898) Calling .Close
I0407 12:32:31.438421 1180777 main.go:141] libmachine: Successfully made call to close driver server
I0407 12:32:31.438434 1180777 main.go:141] libmachine: Making call to close connection to plugin binary
I0407 12:32:31.438450 1180777 main.go:141] libmachine: Making call to close driver server
I0407 12:32:31.438458 1180777 main.go:141] libmachine: (functional-728898) Calling .Close
I0407 12:32:31.438704 1180777 main.go:141] libmachine: Successfully made call to close driver server
I0407 12:32:31.438721 1180777 main.go:141] libmachine: Making call to close connection to plugin binary
I0407 12:32:31.438745 1180777 main.go:141] libmachine: (functional-728898) DBG | Closing plugin on server side
functional_test.go:468: (dbg) Run:  out/minikube-linux-amd64 -p functional-728898 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageBuild (4.07s)
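
Note: the ImageBuild run above stages a build-context tarball on the node over SSH, unpacks it under /var/lib/minikube/build, and invokes podman directly with --cgroup-manager=cgroupfs. A minimal manual sketch of the same flow follows; the /home/docker staging path and the "ctx" directory name are illustrative, the rest is taken from the log.

    # copy the build context into the node, then unpack it where the harness does
    minikube -p functional-728898 cp /tmp/build.1973788362.tar /home/docker/build.tar
    minikube -p functional-728898 ssh "sudo mkdir -p /var/lib/minikube/build/ctx && sudo tar -C /var/lib/minikube/build/ctx -xf /home/docker/build.tar"
    # build with podman, forcing the cgroupfs cgroup manager as the log shows
    minikube -p functional-728898 ssh "sudo podman build -t localhost/my-image:functional-728898 /var/lib/minikube/build/ctx --cgroup-manager=cgroupfs"
    # verify the image landed in the cluster runtime, then clean up
    minikube -p functional-728898 image ls
    minikube -p functional-728898 ssh "sudo rm -rf /var/lib/minikube/build/ctx /home/docker/build.tar"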

                                                
                                    
TestFunctional/parallel/ImageCommands/Setup (1.78s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/Setup
functional_test.go:359: (dbg) Run:  docker pull kicbase/echo-server:1.0
functional_test.go:359: (dbg) Done: docker pull kicbase/echo-server:1.0: (1.738032542s)
functional_test.go:364: (dbg) Run:  docker tag kicbase/echo-server:1.0 kicbase/echo-server:functional-728898
--- PASS: TestFunctional/parallel/ImageCommands/Setup (1.78s)

                                                
                                    
TestFunctional/parallel/ServiceCmd/DeployApp (12.3s)

                                                
                                                
=== RUN   TestFunctional/parallel/ServiceCmd/DeployApp
functional_test.go:1456: (dbg) Run:  kubectl --context functional-728898 create deployment hello-node --image=registry.k8s.io/echoserver:1.8
functional_test.go:1462: (dbg) Run:  kubectl --context functional-728898 expose deployment hello-node --type=NodePort --port=8080
functional_test.go:1467: (dbg) TestFunctional/parallel/ServiceCmd/DeployApp: waiting 10m0s for pods matching "app=hello-node" in namespace "default" ...
helpers_test.go:344: "hello-node-fcfd88b6f-hq2l4" [9cd43b15-5de0-4b18-93e0-1a20605b3d3b] Pending / Ready:ContainersNotReady (containers with unready status: [echoserver]) / ContainersReady:ContainersNotReady (containers with unready status: [echoserver])
helpers_test.go:344: "hello-node-fcfd88b6f-hq2l4" [9cd43b15-5de0-4b18-93e0-1a20605b3d3b] Running
functional_test.go:1467: (dbg) TestFunctional/parallel/ServiceCmd/DeployApp: app=hello-node healthy within 12.004621895s
--- PASS: TestFunctional/parallel/ServiceCmd/DeployApp (12.30s)
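
Note: DeployApp is the standard create/expose/wait sequence. A condensed sketch of the same flow, with a readiness wait and a NodePort lookup added for completeness (image, names and port come from the log; the wait and jsonpath lookup are illustrative):

    kubectl --context functional-728898 create deployment hello-node --image=registry.k8s.io/echoserver:1.8
    kubectl --context functional-728898 expose deployment hello-node --type=NodePort --port=8080
    # wait for the pod, then read the NodePort that was allocated
    kubectl --context functional-728898 wait --for=condition=ready pod -l app=hello-node --timeout=600s
    kubectl --context functional-728898 get svc hello-node -o jsonpath='{.spec.ports[0].nodePort}'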

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageLoadDaemon (4.13s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageLoadDaemon
functional_test.go:372: (dbg) Run:  out/minikube-linux-amd64 -p functional-728898 image load --daemon kicbase/echo-server:functional-728898 --alsologtostderr
functional_test.go:372: (dbg) Done: out/minikube-linux-amd64 -p functional-728898 image load --daemon kicbase/echo-server:functional-728898 --alsologtostderr: (3.878480086s)
functional_test.go:468: (dbg) Run:  out/minikube-linux-amd64 -p functional-728898 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageLoadDaemon (4.13s)

                                                
                                    
TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel (0.54s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel
functional_test_tunnel_test.go:154: (dbg) daemon: [out/minikube-linux-amd64 -p functional-728898 tunnel --alsologtostderr]
functional_test_tunnel_test.go:154: (dbg) daemon: [out/minikube-linux-amd64 -p functional-728898 tunnel --alsologtostderr]
functional_test_tunnel_test.go:194: (dbg) stopping [out/minikube-linux-amd64 -p functional-728898 tunnel --alsologtostderr] ...
helpers_test.go:508: unable to kill pid 1178437: os: process already finished
functional_test_tunnel_test.go:194: (dbg) stopping [out/minikube-linux-amd64 -p functional-728898 tunnel --alsologtostderr] ...
helpers_test.go:490: unable to find parent, assuming dead: process does not exist
--- PASS: TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel (0.54s)

                                                
                                    
TestFunctional/parallel/TunnelCmd/serial/StartTunnel (0.02s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/StartTunnel
functional_test_tunnel_test.go:129: (dbg) daemon: [out/minikube-linux-amd64 -p functional-728898 tunnel --alsologtostderr]
--- PASS: TestFunctional/parallel/TunnelCmd/serial/StartTunnel (0.02s)

                                                
                                    
TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup (14.32s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup
functional_test_tunnel_test.go:212: (dbg) Run:  kubectl --context functional-728898 apply -f testdata/testsvc.yaml
functional_test_tunnel_test.go:216: (dbg) TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup: waiting 4m0s for pods matching "run=nginx-svc" in namespace "default" ...
helpers_test.go:344: "nginx-svc" [7a5f880a-151c-4e01-a038-26cdad8a4086] Pending / Ready:ContainersNotReady (containers with unready status: [nginx]) / ContainersReady:ContainersNotReady (containers with unready status: [nginx])
helpers_test.go:344: "nginx-svc" [7a5f880a-151c-4e01-a038-26cdad8a4086] Running
functional_test_tunnel_test.go:216: (dbg) TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup: run=nginx-svc healthy within 14.006133845s
I0407 12:31:26.114468 1169716 kapi.go:150] Service nginx-svc in namespace default found.
--- PASS: TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup (14.32s)

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageReloadDaemon (1.05s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageReloadDaemon
functional_test.go:382: (dbg) Run:  out/minikube-linux-amd64 -p functional-728898 image load --daemon kicbase/echo-server:functional-728898 --alsologtostderr
functional_test.go:468: (dbg) Run:  out/minikube-linux-amd64 -p functional-728898 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageReloadDaemon (1.05s)

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon (2.24s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon
functional_test.go:252: (dbg) Run:  docker pull kicbase/echo-server:latest
functional_test.go:257: (dbg) Run:  docker tag kicbase/echo-server:latest kicbase/echo-server:functional-728898
functional_test.go:262: (dbg) Run:  out/minikube-linux-amd64 -p functional-728898 image load --daemon kicbase/echo-server:functional-728898 --alsologtostderr
functional_test.go:262: (dbg) Done: out/minikube-linux-amd64 -p functional-728898 image load --daemon kicbase/echo-server:functional-728898 --alsologtostderr: (1.217239202s)
functional_test.go:468: (dbg) Run:  out/minikube-linux-amd64 -p functional-728898 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon (2.24s)

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageSaveToFile (0.78s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageSaveToFile
functional_test.go:397: (dbg) Run:  out/minikube-linux-amd64 -p functional-728898 image save kicbase/echo-server:functional-728898 /home/jenkins/workspace/KVM_Linux_crio_integration/echo-server-save.tar --alsologtostderr
--- PASS: TestFunctional/parallel/ImageCommands/ImageSaveToFile (0.78s)

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageRemove (0.61s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageRemove
functional_test.go:409: (dbg) Run:  out/minikube-linux-amd64 -p functional-728898 image rm kicbase/echo-server:functional-728898 --alsologtostderr
functional_test.go:468: (dbg) Run:  out/minikube-linux-amd64 -p functional-728898 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageRemove (0.61s)

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageLoadFromFile (1.11s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageLoadFromFile
functional_test.go:426: (dbg) Run:  out/minikube-linux-amd64 -p functional-728898 image load /home/jenkins/workspace/KVM_Linux_crio_integration/echo-server-save.tar --alsologtostderr
functional_test.go:468: (dbg) Run:  out/minikube-linux-amd64 -p functional-728898 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageLoadFromFile (1.11s)

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageSaveDaemon (0.89s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageSaveDaemon
functional_test.go:436: (dbg) Run:  docker rmi kicbase/echo-server:functional-728898
functional_test.go:441: (dbg) Run:  out/minikube-linux-amd64 -p functional-728898 image save --daemon kicbase/echo-server:functional-728898 --alsologtostderr
functional_test.go:449: (dbg) Run:  docker image inspect localhost/kicbase/echo-server:functional-728898
--- PASS: TestFunctional/parallel/ImageCommands/ImageSaveDaemon (0.89s)
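
Note: ImageSaveToFile, ImageRemove, ImageLoadFromFile and ImageSaveDaemon together form a save / remove / reload round trip. The same sequence on the CLI (tag from the log; the local tarball path is illustrative):

    # export the image from the cluster runtime to a tarball on the host
    minikube -p functional-728898 image save kicbase/echo-server:functional-728898 ./echo-server-save.tar
    # remove it from the cluster, then load it back from the tarball
    minikube -p functional-728898 image rm kicbase/echo-server:functional-728898
    minikube -p functional-728898 image load ./echo-server-save.tar
    # alternatively, push it back into the local docker daemon instead of a file
    minikube -p functional-728898 image save --daemon kicbase/echo-server:functional-728898
    minikube -p functional-728898 image ls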

                                                
                                    
TestFunctional/parallel/ServiceCmd/List (0.62s)

                                                
                                                
=== RUN   TestFunctional/parallel/ServiceCmd/List
functional_test.go:1476: (dbg) Run:  out/minikube-linux-amd64 -p functional-728898 service list
--- PASS: TestFunctional/parallel/ServiceCmd/List (0.62s)

                                                
                                    
TestFunctional/parallel/ServiceCmd/JSONOutput (0.6s)

                                                
                                                
=== RUN   TestFunctional/parallel/ServiceCmd/JSONOutput
functional_test.go:1506: (dbg) Run:  out/minikube-linux-amd64 -p functional-728898 service list -o json
functional_test.go:1511: Took "601.933768ms" to run "out/minikube-linux-amd64 -p functional-728898 service list -o json"
--- PASS: TestFunctional/parallel/ServiceCmd/JSONOutput (0.60s)

                                                
                                    
TestFunctional/parallel/ServiceCmd/HTTPS (0.48s)

                                                
                                                
=== RUN   TestFunctional/parallel/ServiceCmd/HTTPS
functional_test.go:1526: (dbg) Run:  out/minikube-linux-amd64 -p functional-728898 service --namespace=default --https --url hello-node
functional_test.go:1539: found endpoint: https://192.168.39.151:32371
--- PASS: TestFunctional/parallel/ServiceCmd/HTTPS (0.48s)

                                                
                                    
TestFunctional/parallel/ServiceCmd/Format (0.42s)

                                                
                                                
=== RUN   TestFunctional/parallel/ServiceCmd/Format
functional_test.go:1557: (dbg) Run:  out/minikube-linux-amd64 -p functional-728898 service hello-node --url --format={{.IP}}
--- PASS: TestFunctional/parallel/ServiceCmd/Format (0.42s)

                                                
                                    
TestFunctional/parallel/ServiceCmd/URL (0.43s)

                                                
                                                
=== RUN   TestFunctional/parallel/ServiceCmd/URL
functional_test.go:1576: (dbg) Run:  out/minikube-linux-amd64 -p functional-728898 service hello-node --url
functional_test.go:1582: found endpoint for hello-node: http://192.168.39.151:32371
--- PASS: TestFunctional/parallel/ServiceCmd/URL (0.43s)
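
Note: the HTTPS, Format and URL subtests all resolve the same NodePort endpoint for hello-node. The lookups can be reproduced directly; the curl probe is illustrative and the URL shown is the one this run reported:

    minikube -p functional-728898 service hello-node --url
    # node IP only, as the Format subtest requests it
    minikube -p functional-728898 service hello-node --url --format='{{.IP}}'
    curl -s http://192.168.39.151:32371/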

                                                
                                    
TestFunctional/parallel/UpdateContextCmd/no_changes (0.11s)

                                                
                                                
=== RUN   TestFunctional/parallel/UpdateContextCmd/no_changes
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_changes

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/UpdateContextCmd/no_changes
functional_test.go:2136: (dbg) Run:  out/minikube-linux-amd64 -p functional-728898 update-context --alsologtostderr -v=2
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_changes (0.11s)

                                                
                                    
TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster (0.12s)

                                                
                                                
=== RUN   TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster
functional_test.go:2136: (dbg) Run:  out/minikube-linux-amd64 -p functional-728898 update-context --alsologtostderr -v=2
2025/04/07 12:33:53 [DEBUG] GET http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster (0.12s)

                                                
                                    
TestFunctional/parallel/UpdateContextCmd/no_clusters (0.11s)

                                                
                                                
=== RUN   TestFunctional/parallel/UpdateContextCmd/no_clusters
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_clusters

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/UpdateContextCmd/no_clusters
functional_test.go:2136: (dbg) Run:  out/minikube-linux-amd64 -p functional-728898 update-context --alsologtostderr -v=2
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_clusters (0.11s)
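
Note: update-context rewrites the cluster endpoint in the active kubeconfig for the profile. A quick way to confirm its effect (the jsonpath query is illustrative):

    minikube -p functional-728898 update-context
    kubectl config view -o jsonpath='{.clusters[?(@.name=="functional-728898")].cluster.server}'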

                                                
                                    
TestFunctional/parallel/TunnelCmd/serial/WaitService/IngressIP (0.08s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/WaitService/IngressIP
functional_test_tunnel_test.go:234: (dbg) Run:  kubectl --context functional-728898 get svc nginx-svc -o jsonpath={.status.loadBalancer.ingress[0].ip}
--- PASS: TestFunctional/parallel/TunnelCmd/serial/WaitService/IngressIP (0.08s)

                                                
                                    
TestFunctional/parallel/TunnelCmd/serial/AccessDirect (0.01s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/AccessDirect
functional_test_tunnel_test.go:299: tunnel at http://10.96.97.41 is working!
--- PASS: TestFunctional/parallel/TunnelCmd/serial/AccessDirect (0.01s)
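
Note: the tunnel subtests exercise the LoadBalancer path: minikube tunnel runs in the background, the nginx-svc LoadBalancer receives an ingress IP, and the test then hits that address directly. A manual sketch (manifest and jsonpath from the log; 10.96.97.41 is the IP this run reported, yours will differ):

    # terminal 1: keep the tunnel running
    minikube -p functional-728898 tunnel
    # terminal 2: create the LoadBalancer service and read its ingress IP
    kubectl --context functional-728898 apply -f testdata/testsvc.yaml
    kubectl --context functional-728898 get svc nginx-svc -o jsonpath='{.status.loadBalancer.ingress[0].ip}'
    curl -s http://10.96.97.41/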

                                                
                                    
TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel (0.12s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel
functional_test_tunnel_test.go:434: (dbg) stopping [out/minikube-linux-amd64 -p functional-728898 tunnel --alsologtostderr] ...
functional_test_tunnel_test.go:437: failed to stop process: signal: terminated
--- PASS: TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel (0.12s)

                                                
                                    
TestFunctional/parallel/ProfileCmd/profile_not_create (0.53s)

                                                
                                                
=== RUN   TestFunctional/parallel/ProfileCmd/profile_not_create
functional_test.go:1287: (dbg) Run:  out/minikube-linux-amd64 profile lis
functional_test.go:1292: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestFunctional/parallel/ProfileCmd/profile_not_create (0.53s)

                                                
                                    
TestFunctional/parallel/ProfileCmd/profile_list (0.45s)

                                                
                                                
=== RUN   TestFunctional/parallel/ProfileCmd/profile_list
functional_test.go:1327: (dbg) Run:  out/minikube-linux-amd64 profile list
functional_test.go:1332: Took "383.653004ms" to run "out/minikube-linux-amd64 profile list"
functional_test.go:1341: (dbg) Run:  out/minikube-linux-amd64 profile list -l
functional_test.go:1346: Took "63.251506ms" to run "out/minikube-linux-amd64 profile list -l"
--- PASS: TestFunctional/parallel/ProfileCmd/profile_list (0.45s)

                                                
                                    
TestFunctional/parallel/ProfileCmd/profile_json_output (0.4s)

                                                
                                                
=== RUN   TestFunctional/parallel/ProfileCmd/profile_json_output
functional_test.go:1378: (dbg) Run:  out/minikube-linux-amd64 profile list -o json
functional_test.go:1383: Took "331.564257ms" to run "out/minikube-linux-amd64 profile list -o json"
functional_test.go:1391: (dbg) Run:  out/minikube-linux-amd64 profile list -o json --light
functional_test.go:1396: Took "71.695388ms" to run "out/minikube-linux-amd64 profile list -o json --light"
--- PASS: TestFunctional/parallel/ProfileCmd/profile_json_output (0.40s)

                                                
                                    
TestFunctional/parallel/MountCmd/any-port (55.7s)

                                                
                                                
=== RUN   TestFunctional/parallel/MountCmd/any-port
functional_test_mount_test.go:73: (dbg) daemon: [out/minikube-linux-amd64 mount -p functional-728898 /tmp/TestFunctionalparallelMountCmdany-port3344355477/001:/mount-9p --alsologtostderr -v=1]
functional_test_mount_test.go:107: wrote "test-1744029087714265853" to /tmp/TestFunctionalparallelMountCmdany-port3344355477/001/created-by-test
functional_test_mount_test.go:107: wrote "test-1744029087714265853" to /tmp/TestFunctionalparallelMountCmdany-port3344355477/001/created-by-test-removed-by-pod
functional_test_mount_test.go:107: wrote "test-1744029087714265853" to /tmp/TestFunctionalparallelMountCmdany-port3344355477/001/test-1744029087714265853
functional_test_mount_test.go:115: (dbg) Run:  out/minikube-linux-amd64 -p functional-728898 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:115: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-728898 ssh "findmnt -T /mount-9p | grep 9p": exit status 1 (259.29581ms)

                                                
                                                
** stderr ** 
	ssh: Process exited with status 1

                                                
                                                
** /stderr **
I0407 12:31:27.973911 1169716 retry.go:31] will retry after 515.886237ms: exit status 1
functional_test_mount_test.go:115: (dbg) Run:  out/minikube-linux-amd64 -p functional-728898 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:129: (dbg) Run:  out/minikube-linux-amd64 -p functional-728898 ssh -- ls -la /mount-9p
functional_test_mount_test.go:133: guest mount directory contents
total 2
-rw-r--r-- 1 docker docker 24 Apr  7 12:31 created-by-test
-rw-r--r-- 1 docker docker 24 Apr  7 12:31 created-by-test-removed-by-pod
-rw-r--r-- 1 docker docker 24 Apr  7 12:31 test-1744029087714265853
functional_test_mount_test.go:137: (dbg) Run:  out/minikube-linux-amd64 -p functional-728898 ssh cat /mount-9p/test-1744029087714265853
functional_test_mount_test.go:148: (dbg) Run:  kubectl --context functional-728898 replace --force -f testdata/busybox-mount-test.yaml
functional_test_mount_test.go:153: (dbg) TestFunctional/parallel/MountCmd/any-port: waiting 4m0s for pods matching "integration-test=busybox-mount" in namespace "default" ...
helpers_test.go:344: "busybox-mount" [ce03bda2-e049-4c16-bc7f-d21929aa75ce] Pending
helpers_test.go:344: "busybox-mount" [ce03bda2-e049-4c16-bc7f-d21929aa75ce] Pending / Ready:ContainersNotReady (containers with unready status: [mount-munger]) / ContainersReady:ContainersNotReady (containers with unready status: [mount-munger])
helpers_test.go:344: "busybox-mount" [ce03bda2-e049-4c16-bc7f-d21929aa75ce] Pending / Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
helpers_test.go:344: "busybox-mount" [ce03bda2-e049-4c16-bc7f-d21929aa75ce] Succeeded / Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
functional_test_mount_test.go:153: (dbg) TestFunctional/parallel/MountCmd/any-port: integration-test=busybox-mount healthy within 53.004016356s
functional_test_mount_test.go:169: (dbg) Run:  kubectl --context functional-728898 logs busybox-mount
functional_test_mount_test.go:181: (dbg) Run:  out/minikube-linux-amd64 -p functional-728898 ssh stat /mount-9p/created-by-test
functional_test_mount_test.go:181: (dbg) Run:  out/minikube-linux-amd64 -p functional-728898 ssh stat /mount-9p/created-by-pod
functional_test_mount_test.go:90: (dbg) Run:  out/minikube-linux-amd64 -p functional-728898 ssh "sudo umount -f /mount-9p"
functional_test_mount_test.go:94: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-728898 /tmp/TestFunctionalparallelMountCmdany-port3344355477/001:/mount-9p --alsologtostderr -v=1] ...
--- PASS: TestFunctional/parallel/MountCmd/any-port (55.70s)
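
Note: MountCmd/any-port drives a 9p mount end to end: the host directory is exposed at /mount-9p in the guest, checked with findmnt, consumed by the busybox-mount pod, and finally unmounted. A hedged manual version (the /tmp/mount-demo host path is illustrative):

    # keep the 9p mount running in the background
    mkdir -p /tmp/mount-demo
    minikube mount -p functional-728898 /tmp/mount-demo:/mount-9p &
    # confirm the guest sees a 9p filesystem at the mount point, then list it
    minikube -p functional-728898 ssh "findmnt -T /mount-9p | grep 9p"
    minikube -p functional-728898 ssh -- ls -la /mount-9p
    # tear the mount down when finished
    minikube -p functional-728898 ssh "sudo umount -f /mount-9p"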

                                                
                                    
TestFunctional/parallel/MountCmd/specific-port (1.76s)

                                                
                                                
=== RUN   TestFunctional/parallel/MountCmd/specific-port
functional_test_mount_test.go:213: (dbg) daemon: [out/minikube-linux-amd64 mount -p functional-728898 /tmp/TestFunctionalparallelMountCmdspecific-port2444163506/001:/mount-9p --alsologtostderr -v=1 --port 46464]
functional_test_mount_test.go:243: (dbg) Run:  out/minikube-linux-amd64 -p functional-728898 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:243: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-728898 ssh "findmnt -T /mount-9p | grep 9p": exit status 1 (225.463922ms)

                                                
                                                
** stderr ** 
	ssh: Process exited with status 1

                                                
                                                
** /stderr **
I0407 12:32:23.639102 1169716 retry.go:31] will retry after 430.121892ms: exit status 1
functional_test_mount_test.go:243: (dbg) Run:  out/minikube-linux-amd64 -p functional-728898 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:257: (dbg) Run:  out/minikube-linux-amd64 -p functional-728898 ssh -- ls -la /mount-9p
functional_test_mount_test.go:261: guest mount directory contents
total 0
functional_test_mount_test.go:263: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-728898 /tmp/TestFunctionalparallelMountCmdspecific-port2444163506/001:/mount-9p --alsologtostderr -v=1 --port 46464] ...
functional_test_mount_test.go:264: reading mount text
functional_test_mount_test.go:278: done reading mount text
functional_test_mount_test.go:230: (dbg) Run:  out/minikube-linux-amd64 -p functional-728898 ssh "sudo umount -f /mount-9p"
functional_test_mount_test.go:230: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-728898 ssh "sudo umount -f /mount-9p": exit status 1 (223.841963ms)

                                                
                                                
-- stdout --
	umount: /mount-9p: not mounted.

                                                
                                                
-- /stdout --
** stderr ** 
	ssh: Process exited with status 32

                                                
                                                
** /stderr **
functional_test_mount_test.go:232: "out/minikube-linux-amd64 -p functional-728898 ssh \"sudo umount -f /mount-9p\"": exit status 1
functional_test_mount_test.go:234: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-728898 /tmp/TestFunctionalparallelMountCmdspecific-port2444163506/001:/mount-9p --alsologtostderr -v=1 --port 46464] ...
--- PASS: TestFunctional/parallel/MountCmd/specific-port (1.76s)

                                                
                                    
TestFunctional/parallel/MountCmd/VerifyCleanup (1.37s)

                                                
                                                
=== RUN   TestFunctional/parallel/MountCmd/VerifyCleanup
functional_test_mount_test.go:298: (dbg) daemon: [out/minikube-linux-amd64 mount -p functional-728898 /tmp/TestFunctionalparallelMountCmdVerifyCleanup2559023673/001:/mount1 --alsologtostderr -v=1]
functional_test_mount_test.go:298: (dbg) daemon: [out/minikube-linux-amd64 mount -p functional-728898 /tmp/TestFunctionalparallelMountCmdVerifyCleanup2559023673/001:/mount2 --alsologtostderr -v=1]
functional_test_mount_test.go:298: (dbg) daemon: [out/minikube-linux-amd64 mount -p functional-728898 /tmp/TestFunctionalparallelMountCmdVerifyCleanup2559023673/001:/mount3 --alsologtostderr -v=1]
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-linux-amd64 -p functional-728898 ssh "findmnt -T" /mount1
functional_test_mount_test.go:325: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-728898 ssh "findmnt -T" /mount1: exit status 1 (244.891191ms)

                                                
                                                
** stderr ** 
	ssh: Process exited with status 1

                                                
                                                
** /stderr **
I0407 12:32:25.421836 1169716 retry.go:31] will retry after 415.608921ms: exit status 1
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-linux-amd64 -p functional-728898 ssh "findmnt -T" /mount1
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-linux-amd64 -p functional-728898 ssh "findmnt -T" /mount2
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-linux-amd64 -p functional-728898 ssh "findmnt -T" /mount3
functional_test_mount_test.go:370: (dbg) Run:  out/minikube-linux-amd64 mount -p functional-728898 --kill=true
functional_test_mount_test.go:313: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-728898 /tmp/TestFunctionalparallelMountCmdVerifyCleanup2559023673/001:/mount1 --alsologtostderr -v=1] ...
helpers_test.go:490: unable to find parent, assuming dead: process does not exist
functional_test_mount_test.go:313: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-728898 /tmp/TestFunctionalparallelMountCmdVerifyCleanup2559023673/001:/mount2 --alsologtostderr -v=1] ...
helpers_test.go:490: unable to find parent, assuming dead: process does not exist
functional_test_mount_test.go:313: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-728898 /tmp/TestFunctionalparallelMountCmdVerifyCleanup2559023673/001:/mount3 --alsologtostderr -v=1] ...
helpers_test.go:490: unable to find parent, assuming dead: process does not exist
--- PASS: TestFunctional/parallel/MountCmd/VerifyCleanup (1.37s)

                                                
                                    
TestFunctional/delete_echo-server_images (0.04s)

                                                
                                                
=== RUN   TestFunctional/delete_echo-server_images
functional_test.go:207: (dbg) Run:  docker rmi -f kicbase/echo-server:1.0
functional_test.go:207: (dbg) Run:  docker rmi -f kicbase/echo-server:functional-728898
--- PASS: TestFunctional/delete_echo-server_images (0.04s)

                                                
                                    
TestFunctional/delete_my-image_image (0.02s)

                                                
                                                
=== RUN   TestFunctional/delete_my-image_image
functional_test.go:215: (dbg) Run:  docker rmi -f localhost/my-image:functional-728898
--- PASS: TestFunctional/delete_my-image_image (0.02s)

                                                
                                    
TestFunctional/delete_minikube_cached_images (0.02s)

                                                
                                                
=== RUN   TestFunctional/delete_minikube_cached_images
functional_test.go:223: (dbg) Run:  docker rmi -f minikube-local-cache-test:functional-728898
--- PASS: TestFunctional/delete_minikube_cached_images (0.02s)

                                                
                                    
TestMultiControlPlane/serial/StartCluster (196.37s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/StartCluster
ha_test.go:101: (dbg) Run:  out/minikube-linux-amd64 start -p ha-547392 --wait=true --memory=2200 --ha -v=7 --alsologtostderr --driver=kvm2  --container-runtime=crio
E0407 12:42:04.915902 1169716 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20602-1162386/.minikube/profiles/addons-660533/client.crt: no such file or directory" logger="UnhandledError"
ha_test.go:101: (dbg) Done: out/minikube-linux-amd64 start -p ha-547392 --wait=true --memory=2200 --ha -v=7 --alsologtostderr --driver=kvm2  --container-runtime=crio: (3m15.669787827s)
ha_test.go:107: (dbg) Run:  out/minikube-linux-amd64 -p ha-547392 status -v=7 --alsologtostderr
--- PASS: TestMultiControlPlane/serial/StartCluster (196.37s)
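
Note: StartCluster brings up a cluster with three control-plane nodes via the --ha flag; the command line from this run can be reused as-is (profile name from the log, minikube standing in for out/minikube-linux-amd64), with a node listing added as an illustrative follow-up:

    minikube start -p ha-547392 --wait=true --memory=2200 --ha -v=7 --alsologtostderr --driver=kvm2 --container-runtime=crio
    minikube -p ha-547392 status -v=7 --alsologtostderr
    kubectl --context ha-547392 get nodes -o wide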

                                                
                                    
TestMultiControlPlane/serial/DeployApp (7.43s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/DeployApp
ha_test.go:128: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-547392 -- apply -f ./testdata/ha/ha-pod-dns-test.yaml
ha_test.go:133: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-547392 -- rollout status deployment/busybox
ha_test.go:133: (dbg) Done: out/minikube-linux-amd64 kubectl -p ha-547392 -- rollout status deployment/busybox: (5.151130313s)
ha_test.go:140: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-547392 -- get pods -o jsonpath='{.items[*].status.podIP}'
ha_test.go:163: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-547392 -- get pods -o jsonpath='{.items[*].metadata.name}'
ha_test.go:171: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-547392 -- exec busybox-58667487b6-2blb9 -- nslookup kubernetes.io
ha_test.go:171: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-547392 -- exec busybox-58667487b6-ct9xd -- nslookup kubernetes.io
ha_test.go:171: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-547392 -- exec busybox-58667487b6-zrwsm -- nslookup kubernetes.io
ha_test.go:181: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-547392 -- exec busybox-58667487b6-2blb9 -- nslookup kubernetes.default
ha_test.go:181: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-547392 -- exec busybox-58667487b6-ct9xd -- nslookup kubernetes.default
ha_test.go:181: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-547392 -- exec busybox-58667487b6-zrwsm -- nslookup kubernetes.default
ha_test.go:189: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-547392 -- exec busybox-58667487b6-2blb9 -- nslookup kubernetes.default.svc.cluster.local
ha_test.go:189: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-547392 -- exec busybox-58667487b6-ct9xd -- nslookup kubernetes.default.svc.cluster.local
ha_test.go:189: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-547392 -- exec busybox-58667487b6-zrwsm -- nslookup kubernetes.default.svc.cluster.local
--- PASS: TestMultiControlPlane/serial/DeployApp (7.43s)

                                                
                                    
TestMultiControlPlane/serial/PingHostFromPods (1.26s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/PingHostFromPods
ha_test.go:199: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-547392 -- get pods -o jsonpath='{.items[*].metadata.name}'
ha_test.go:207: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-547392 -- exec busybox-58667487b6-2blb9 -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
ha_test.go:218: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-547392 -- exec busybox-58667487b6-2blb9 -- sh -c "ping -c 1 192.168.39.1"
ha_test.go:207: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-547392 -- exec busybox-58667487b6-ct9xd -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
ha_test.go:218: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-547392 -- exec busybox-58667487b6-ct9xd -- sh -c "ping -c 1 192.168.39.1"
ha_test.go:207: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-547392 -- exec busybox-58667487b6-zrwsm -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
ha_test.go:218: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-547392 -- exec busybox-58667487b6-zrwsm -- sh -c "ping -c 1 192.168.39.1"
--- PASS: TestMultiControlPlane/serial/PingHostFromPods (1.26s)
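
Note: DeployApp and PingHostFromPods verify in-cluster DNS and host reachability from every busybox replica. A condensed check against a single pod (the pod selection via jsonpath is illustrative; the nslookup/ping pipeline is exactly what the test runs):

    POD=$(kubectl --context ha-547392 get pods -o jsonpath='{.items[0].metadata.name}')
    kubectl --context ha-547392 exec "$POD" -- nslookup kubernetes.default.svc.cluster.local
    # resolve the host gateway name, then ping the gateway as the test does
    kubectl --context ha-547392 exec "$POD" -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
    kubectl --context ha-547392 exec "$POD" -- sh -c "ping -c 1 192.168.39.1"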

                                                
                                    
TestMultiControlPlane/serial/AddWorkerNode (57.13s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/AddWorkerNode
ha_test.go:228: (dbg) Run:  out/minikube-linux-amd64 node add -p ha-547392 -v=7 --alsologtostderr
ha_test.go:228: (dbg) Done: out/minikube-linux-amd64 node add -p ha-547392 -v=7 --alsologtostderr: (56.221643818s)
ha_test.go:234: (dbg) Run:  out/minikube-linux-amd64 -p ha-547392 status -v=7 --alsologtostderr
--- PASS: TestMultiControlPlane/serial/AddWorkerNode (57.13s)

                                                
                                    
TestMultiControlPlane/serial/NodeLabels (0.07s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/NodeLabels
ha_test.go:255: (dbg) Run:  kubectl --context ha-547392 get nodes -o "jsonpath=[{range .items[*]}{.metadata.labels},{end}]"
--- PASS: TestMultiControlPlane/serial/NodeLabels (0.07s)

                                                
                                    
TestMultiControlPlane/serial/HAppyAfterClusterStart (0.91s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/HAppyAfterClusterStart
ha_test.go:281: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestMultiControlPlane/serial/HAppyAfterClusterStart (0.91s)

                                                
                                    
TestMultiControlPlane/serial/CopyFile (13.99s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/CopyFile
ha_test.go:328: (dbg) Run:  out/minikube-linux-amd64 -p ha-547392 status --output json -v=7 --alsologtostderr
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-547392 cp testdata/cp-test.txt ha-547392:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-547392 ssh -n ha-547392 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-547392 cp ha-547392:/home/docker/cp-test.txt /tmp/TestMultiControlPlaneserialCopyFile4110523383/001/cp-test_ha-547392.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-547392 ssh -n ha-547392 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-547392 cp ha-547392:/home/docker/cp-test.txt ha-547392-m02:/home/docker/cp-test_ha-547392_ha-547392-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-547392 ssh -n ha-547392 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-547392 ssh -n ha-547392-m02 "sudo cat /home/docker/cp-test_ha-547392_ha-547392-m02.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-547392 cp ha-547392:/home/docker/cp-test.txt ha-547392-m03:/home/docker/cp-test_ha-547392_ha-547392-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-547392 ssh -n ha-547392 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-547392 ssh -n ha-547392-m03 "sudo cat /home/docker/cp-test_ha-547392_ha-547392-m03.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-547392 cp ha-547392:/home/docker/cp-test.txt ha-547392-m04:/home/docker/cp-test_ha-547392_ha-547392-m04.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-547392 ssh -n ha-547392 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-547392 ssh -n ha-547392-m04 "sudo cat /home/docker/cp-test_ha-547392_ha-547392-m04.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-547392 cp testdata/cp-test.txt ha-547392-m02:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-547392 ssh -n ha-547392-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-547392 cp ha-547392-m02:/home/docker/cp-test.txt /tmp/TestMultiControlPlaneserialCopyFile4110523383/001/cp-test_ha-547392-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-547392 ssh -n ha-547392-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-547392 cp ha-547392-m02:/home/docker/cp-test.txt ha-547392:/home/docker/cp-test_ha-547392-m02_ha-547392.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-547392 ssh -n ha-547392-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-547392 ssh -n ha-547392 "sudo cat /home/docker/cp-test_ha-547392-m02_ha-547392.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-547392 cp ha-547392-m02:/home/docker/cp-test.txt ha-547392-m03:/home/docker/cp-test_ha-547392-m02_ha-547392-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-547392 ssh -n ha-547392-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-547392 ssh -n ha-547392-m03 "sudo cat /home/docker/cp-test_ha-547392-m02_ha-547392-m03.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-547392 cp ha-547392-m02:/home/docker/cp-test.txt ha-547392-m04:/home/docker/cp-test_ha-547392-m02_ha-547392-m04.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-547392 ssh -n ha-547392-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-547392 ssh -n ha-547392-m04 "sudo cat /home/docker/cp-test_ha-547392-m02_ha-547392-m04.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-547392 cp testdata/cp-test.txt ha-547392-m03:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-547392 ssh -n ha-547392-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-547392 cp ha-547392-m03:/home/docker/cp-test.txt /tmp/TestMultiControlPlaneserialCopyFile4110523383/001/cp-test_ha-547392-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-547392 ssh -n ha-547392-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-547392 cp ha-547392-m03:/home/docker/cp-test.txt ha-547392:/home/docker/cp-test_ha-547392-m03_ha-547392.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-547392 ssh -n ha-547392-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-547392 ssh -n ha-547392 "sudo cat /home/docker/cp-test_ha-547392-m03_ha-547392.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-547392 cp ha-547392-m03:/home/docker/cp-test.txt ha-547392-m02:/home/docker/cp-test_ha-547392-m03_ha-547392-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-547392 ssh -n ha-547392-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-547392 ssh -n ha-547392-m02 "sudo cat /home/docker/cp-test_ha-547392-m03_ha-547392-m02.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-547392 cp ha-547392-m03:/home/docker/cp-test.txt ha-547392-m04:/home/docker/cp-test_ha-547392-m03_ha-547392-m04.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-547392 ssh -n ha-547392-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-547392 ssh -n ha-547392-m04 "sudo cat /home/docker/cp-test_ha-547392-m03_ha-547392-m04.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-547392 cp testdata/cp-test.txt ha-547392-m04:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-547392 ssh -n ha-547392-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-547392 cp ha-547392-m04:/home/docker/cp-test.txt /tmp/TestMultiControlPlaneserialCopyFile4110523383/001/cp-test_ha-547392-m04.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-547392 ssh -n ha-547392-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-547392 cp ha-547392-m04:/home/docker/cp-test.txt ha-547392:/home/docker/cp-test_ha-547392-m04_ha-547392.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-547392 ssh -n ha-547392-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-547392 ssh -n ha-547392 "sudo cat /home/docker/cp-test_ha-547392-m04_ha-547392.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-547392 cp ha-547392-m04:/home/docker/cp-test.txt ha-547392-m02:/home/docker/cp-test_ha-547392-m04_ha-547392-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-547392 ssh -n ha-547392-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-547392 ssh -n ha-547392-m02 "sudo cat /home/docker/cp-test_ha-547392-m04_ha-547392-m02.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-547392 cp ha-547392-m04:/home/docker/cp-test.txt ha-547392-m03:/home/docker/cp-test_ha-547392-m04_ha-547392-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-547392 ssh -n ha-547392-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-547392 ssh -n ha-547392-m03 "sudo cat /home/docker/cp-test_ha-547392-m04_ha-547392-m03.txt"
--- PASS: TestMultiControlPlane/serial/CopyFile (13.99s)
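
Note: CopyFile exercises minikube cp in every direction (host to node, node to host, node to node) and verifies each copy with ssh -n plus sudo cat. One representative round trip, with file names taken from the log and the local output path illustrative:

    # host -> primary control plane
    minikube -p ha-547392 cp testdata/cp-test.txt ha-547392:/home/docker/cp-test.txt
    # node -> node, then read it back on the target
    minikube -p ha-547392 cp ha-547392:/home/docker/cp-test.txt ha-547392-m02:/home/docker/cp-test_ha-547392_ha-547392-m02.txt
    minikube -p ha-547392 ssh -n ha-547392-m02 "sudo cat /home/docker/cp-test_ha-547392_ha-547392-m02.txt"
    # node -> host
    minikube -p ha-547392 cp ha-547392:/home/docker/cp-test.txt ./cp-test_ha-547392.txt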

                                                
                                    
TestMultiControlPlane/serial/StopSecondaryNode (91.71s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/StopSecondaryNode
ha_test.go:365: (dbg) Run:  out/minikube-linux-amd64 -p ha-547392 node stop m02 -v=7 --alsologtostderr
E0407 12:46:09.343919 1169716 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20602-1162386/.minikube/profiles/functional-728898/client.crt: no such file or directory" logger="UnhandledError"
E0407 12:46:09.350449 1169716 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20602-1162386/.minikube/profiles/functional-728898/client.crt: no such file or directory" logger="UnhandledError"
E0407 12:46:09.361988 1169716 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20602-1162386/.minikube/profiles/functional-728898/client.crt: no such file or directory" logger="UnhandledError"
E0407 12:46:09.383765 1169716 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20602-1162386/.minikube/profiles/functional-728898/client.crt: no such file or directory" logger="UnhandledError"
E0407 12:46:09.425348 1169716 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20602-1162386/.minikube/profiles/functional-728898/client.crt: no such file or directory" logger="UnhandledError"
E0407 12:46:09.507000 1169716 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20602-1162386/.minikube/profiles/functional-728898/client.crt: no such file or directory" logger="UnhandledError"
E0407 12:46:09.668656 1169716 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20602-1162386/.minikube/profiles/functional-728898/client.crt: no such file or directory" logger="UnhandledError"
E0407 12:46:09.990417 1169716 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20602-1162386/.minikube/profiles/functional-728898/client.crt: no such file or directory" logger="UnhandledError"
E0407 12:46:10.632713 1169716 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20602-1162386/.minikube/profiles/functional-728898/client.crt: no such file or directory" logger="UnhandledError"
E0407 12:46:11.914301 1169716 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20602-1162386/.minikube/profiles/functional-728898/client.crt: no such file or directory" logger="UnhandledError"
E0407 12:46:14.476159 1169716 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20602-1162386/.minikube/profiles/functional-728898/client.crt: no such file or directory" logger="UnhandledError"
E0407 12:46:19.598123 1169716 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20602-1162386/.minikube/profiles/functional-728898/client.crt: no such file or directory" logger="UnhandledError"
E0407 12:46:29.839764 1169716 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20602-1162386/.minikube/profiles/functional-728898/client.crt: no such file or directory" logger="UnhandledError"
E0407 12:46:50.321209 1169716 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20602-1162386/.minikube/profiles/functional-728898/client.crt: no such file or directory" logger="UnhandledError"
E0407 12:47:04.915710 1169716 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20602-1162386/.minikube/profiles/addons-660533/client.crt: no such file or directory" logger="UnhandledError"
E0407 12:47:31.282772 1169716 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20602-1162386/.minikube/profiles/functional-728898/client.crt: no such file or directory" logger="UnhandledError"
ha_test.go:365: (dbg) Done: out/minikube-linux-amd64 -p ha-547392 node stop m02 -v=7 --alsologtostderr: (1m31.039039166s)
ha_test.go:371: (dbg) Run:  out/minikube-linux-amd64 -p ha-547392 status -v=7 --alsologtostderr
ha_test.go:371: (dbg) Non-zero exit: out/minikube-linux-amd64 -p ha-547392 status -v=7 --alsologtostderr: exit status 7 (668.84771ms)

                                                
                                                
-- stdout --
	ha-547392
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	ha-547392-m02
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	ha-547392-m03
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	ha-547392-m04
	type: Worker
	host: Running
	kubelet: Running
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0407 12:47:40.169388 1187570 out.go:345] Setting OutFile to fd 1 ...
	I0407 12:47:40.169525 1187570 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0407 12:47:40.169532 1187570 out.go:358] Setting ErrFile to fd 2...
	I0407 12:47:40.169535 1187570 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0407 12:47:40.169834 1187570 root.go:338] Updating PATH: /home/jenkins/minikube-integration/20602-1162386/.minikube/bin
	I0407 12:47:40.170047 1187570 out.go:352] Setting JSON to false
	I0407 12:47:40.170084 1187570 mustload.go:65] Loading cluster: ha-547392
	I0407 12:47:40.170239 1187570 notify.go:220] Checking for updates...
	I0407 12:47:40.170571 1187570 config.go:182] Loaded profile config "ha-547392": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.32.2
	I0407 12:47:40.170600 1187570 status.go:174] checking status of ha-547392 ...
	I0407 12:47:40.171104 1187570 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0407 12:47:40.171161 1187570 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0407 12:47:40.193782 1187570 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43549
	I0407 12:47:40.194476 1187570 main.go:141] libmachine: () Calling .GetVersion
	I0407 12:47:40.195127 1187570 main.go:141] libmachine: Using API Version  1
	I0407 12:47:40.195160 1187570 main.go:141] libmachine: () Calling .SetConfigRaw
	I0407 12:47:40.195713 1187570 main.go:141] libmachine: () Calling .GetMachineName
	I0407 12:47:40.195909 1187570 main.go:141] libmachine: (ha-547392) Calling .GetState
	I0407 12:47:40.198062 1187570 status.go:371] ha-547392 host status = "Running" (err=<nil>)
	I0407 12:47:40.198085 1187570 host.go:66] Checking if "ha-547392" exists ...
	I0407 12:47:40.198414 1187570 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0407 12:47:40.198467 1187570 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0407 12:47:40.214327 1187570 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45105
	I0407 12:47:40.214880 1187570 main.go:141] libmachine: () Calling .GetVersion
	I0407 12:47:40.215477 1187570 main.go:141] libmachine: Using API Version  1
	I0407 12:47:40.215523 1187570 main.go:141] libmachine: () Calling .SetConfigRaw
	I0407 12:47:40.215877 1187570 main.go:141] libmachine: () Calling .GetMachineName
	I0407 12:47:40.216193 1187570 main.go:141] libmachine: (ha-547392) Calling .GetIP
	I0407 12:47:40.219929 1187570 main.go:141] libmachine: (ha-547392) DBG | domain ha-547392 has defined MAC address 52:54:00:75:68:2f in network mk-ha-547392
	I0407 12:47:40.220544 1187570 main.go:141] libmachine: (ha-547392) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:75:68:2f", ip: ""} in network mk-ha-547392: {Iface:virbr1 ExpiryTime:2025-04-07 13:41:46 +0000 UTC Type:0 Mac:52:54:00:75:68:2f Iaid: IPaddr:192.168.39.20 Prefix:24 Hostname:ha-547392 Clientid:01:52:54:00:75:68:2f}
	I0407 12:47:40.220582 1187570 main.go:141] libmachine: (ha-547392) DBG | domain ha-547392 has defined IP address 192.168.39.20 and MAC address 52:54:00:75:68:2f in network mk-ha-547392
	I0407 12:47:40.220754 1187570 host.go:66] Checking if "ha-547392" exists ...
	I0407 12:47:40.221093 1187570 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0407 12:47:40.221151 1187570 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0407 12:47:40.237551 1187570 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46079
	I0407 12:47:40.238396 1187570 main.go:141] libmachine: () Calling .GetVersion
	I0407 12:47:40.239026 1187570 main.go:141] libmachine: Using API Version  1
	I0407 12:47:40.239055 1187570 main.go:141] libmachine: () Calling .SetConfigRaw
	I0407 12:47:40.239453 1187570 main.go:141] libmachine: () Calling .GetMachineName
	I0407 12:47:40.239729 1187570 main.go:141] libmachine: (ha-547392) Calling .DriverName
	I0407 12:47:40.239954 1187570 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0407 12:47:40.240023 1187570 main.go:141] libmachine: (ha-547392) Calling .GetSSHHostname
	I0407 12:47:40.243503 1187570 main.go:141] libmachine: (ha-547392) DBG | domain ha-547392 has defined MAC address 52:54:00:75:68:2f in network mk-ha-547392
	I0407 12:47:40.244049 1187570 main.go:141] libmachine: (ha-547392) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:75:68:2f", ip: ""} in network mk-ha-547392: {Iface:virbr1 ExpiryTime:2025-04-07 13:41:46 +0000 UTC Type:0 Mac:52:54:00:75:68:2f Iaid: IPaddr:192.168.39.20 Prefix:24 Hostname:ha-547392 Clientid:01:52:54:00:75:68:2f}
	I0407 12:47:40.244081 1187570 main.go:141] libmachine: (ha-547392) DBG | domain ha-547392 has defined IP address 192.168.39.20 and MAC address 52:54:00:75:68:2f in network mk-ha-547392
	I0407 12:47:40.244252 1187570 main.go:141] libmachine: (ha-547392) Calling .GetSSHPort
	I0407 12:47:40.244458 1187570 main.go:141] libmachine: (ha-547392) Calling .GetSSHKeyPath
	I0407 12:47:40.244624 1187570 main.go:141] libmachine: (ha-547392) Calling .GetSSHUsername
	I0407 12:47:40.244782 1187570 sshutil.go:53] new ssh client: &{IP:192.168.39.20 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20602-1162386/.minikube/machines/ha-547392/id_rsa Username:docker}
	I0407 12:47:40.330656 1187570 ssh_runner.go:195] Run: systemctl --version
	I0407 12:47:40.337617 1187570 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0407 12:47:40.355632 1187570 kubeconfig.go:125] found "ha-547392" server: "https://192.168.39.254:8443"
	I0407 12:47:40.355679 1187570 api_server.go:166] Checking apiserver status ...
	I0407 12:47:40.355720 1187570 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0407 12:47:40.371317 1187570 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1140/cgroup
	W0407 12:47:40.382259 1187570 api_server.go:177] unable to find freezer cgroup: sudo egrep ^[0-9]+:freezer: /proc/1140/cgroup: Process exited with status 1
	stdout:
	
	stderr:
	I0407 12:47:40.382337 1187570 ssh_runner.go:195] Run: ls
	I0407 12:47:40.387576 1187570 api_server.go:253] Checking apiserver healthz at https://192.168.39.254:8443/healthz ...
	I0407 12:47:40.392482 1187570 api_server.go:279] https://192.168.39.254:8443/healthz returned 200:
	ok
	I0407 12:47:40.392523 1187570 status.go:463] ha-547392 apiserver status = Running (err=<nil>)
	I0407 12:47:40.392536 1187570 status.go:176] ha-547392 status: &{Name:ha-547392 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0407 12:47:40.392572 1187570 status.go:174] checking status of ha-547392-m02 ...
	I0407 12:47:40.393050 1187570 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0407 12:47:40.393134 1187570 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0407 12:47:40.409844 1187570 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35223
	I0407 12:47:40.410427 1187570 main.go:141] libmachine: () Calling .GetVersion
	I0407 12:47:40.410965 1187570 main.go:141] libmachine: Using API Version  1
	I0407 12:47:40.410996 1187570 main.go:141] libmachine: () Calling .SetConfigRaw
	I0407 12:47:40.411399 1187570 main.go:141] libmachine: () Calling .GetMachineName
	I0407 12:47:40.411641 1187570 main.go:141] libmachine: (ha-547392-m02) Calling .GetState
	I0407 12:47:40.413559 1187570 status.go:371] ha-547392-m02 host status = "Stopped" (err=<nil>)
	I0407 12:47:40.413579 1187570 status.go:384] host is not running, skipping remaining checks
	I0407 12:47:40.413587 1187570 status.go:176] ha-547392-m02 status: &{Name:ha-547392-m02 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0407 12:47:40.413612 1187570 status.go:174] checking status of ha-547392-m03 ...
	I0407 12:47:40.413979 1187570 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0407 12:47:40.414049 1187570 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0407 12:47:40.430351 1187570 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39543
	I0407 12:47:40.431035 1187570 main.go:141] libmachine: () Calling .GetVersion
	I0407 12:47:40.431670 1187570 main.go:141] libmachine: Using API Version  1
	I0407 12:47:40.431701 1187570 main.go:141] libmachine: () Calling .SetConfigRaw
	I0407 12:47:40.432120 1187570 main.go:141] libmachine: () Calling .GetMachineName
	I0407 12:47:40.432350 1187570 main.go:141] libmachine: (ha-547392-m03) Calling .GetState
	I0407 12:47:40.434362 1187570 status.go:371] ha-547392-m03 host status = "Running" (err=<nil>)
	I0407 12:47:40.434383 1187570 host.go:66] Checking if "ha-547392-m03" exists ...
	I0407 12:47:40.434692 1187570 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0407 12:47:40.434736 1187570 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0407 12:47:40.451438 1187570 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33265
	I0407 12:47:40.451979 1187570 main.go:141] libmachine: () Calling .GetVersion
	I0407 12:47:40.452460 1187570 main.go:141] libmachine: Using API Version  1
	I0407 12:47:40.452486 1187570 main.go:141] libmachine: () Calling .SetConfigRaw
	I0407 12:47:40.452890 1187570 main.go:141] libmachine: () Calling .GetMachineName
	I0407 12:47:40.453154 1187570 main.go:141] libmachine: (ha-547392-m03) Calling .GetIP
	I0407 12:47:40.456721 1187570 main.go:141] libmachine: (ha-547392-m03) DBG | domain ha-547392-m03 has defined MAC address 52:54:00:fa:23:82 in network mk-ha-547392
	I0407 12:47:40.457185 1187570 main.go:141] libmachine: (ha-547392-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:fa:23:82", ip: ""} in network mk-ha-547392: {Iface:virbr1 ExpiryTime:2025-04-07 13:43:46 +0000 UTC Type:0 Mac:52:54:00:fa:23:82 Iaid: IPaddr:192.168.39.133 Prefix:24 Hostname:ha-547392-m03 Clientid:01:52:54:00:fa:23:82}
	I0407 12:47:40.457211 1187570 main.go:141] libmachine: (ha-547392-m03) DBG | domain ha-547392-m03 has defined IP address 192.168.39.133 and MAC address 52:54:00:fa:23:82 in network mk-ha-547392
	I0407 12:47:40.457464 1187570 host.go:66] Checking if "ha-547392-m03" exists ...
	I0407 12:47:40.457879 1187570 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0407 12:47:40.457941 1187570 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0407 12:47:40.474287 1187570 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42863
	I0407 12:47:40.474760 1187570 main.go:141] libmachine: () Calling .GetVersion
	I0407 12:47:40.475238 1187570 main.go:141] libmachine: Using API Version  1
	I0407 12:47:40.475266 1187570 main.go:141] libmachine: () Calling .SetConfigRaw
	I0407 12:47:40.475646 1187570 main.go:141] libmachine: () Calling .GetMachineName
	I0407 12:47:40.475856 1187570 main.go:141] libmachine: (ha-547392-m03) Calling .DriverName
	I0407 12:47:40.476096 1187570 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0407 12:47:40.476120 1187570 main.go:141] libmachine: (ha-547392-m03) Calling .GetSSHHostname
	I0407 12:47:40.479704 1187570 main.go:141] libmachine: (ha-547392-m03) DBG | domain ha-547392-m03 has defined MAC address 52:54:00:fa:23:82 in network mk-ha-547392
	I0407 12:47:40.480281 1187570 main.go:141] libmachine: (ha-547392-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:fa:23:82", ip: ""} in network mk-ha-547392: {Iface:virbr1 ExpiryTime:2025-04-07 13:43:46 +0000 UTC Type:0 Mac:52:54:00:fa:23:82 Iaid: IPaddr:192.168.39.133 Prefix:24 Hostname:ha-547392-m03 Clientid:01:52:54:00:fa:23:82}
	I0407 12:47:40.480304 1187570 main.go:141] libmachine: (ha-547392-m03) DBG | domain ha-547392-m03 has defined IP address 192.168.39.133 and MAC address 52:54:00:fa:23:82 in network mk-ha-547392
	I0407 12:47:40.480603 1187570 main.go:141] libmachine: (ha-547392-m03) Calling .GetSSHPort
	I0407 12:47:40.480848 1187570 main.go:141] libmachine: (ha-547392-m03) Calling .GetSSHKeyPath
	I0407 12:47:40.481075 1187570 main.go:141] libmachine: (ha-547392-m03) Calling .GetSSHUsername
	I0407 12:47:40.481275 1187570 sshutil.go:53] new ssh client: &{IP:192.168.39.133 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20602-1162386/.minikube/machines/ha-547392-m03/id_rsa Username:docker}
	I0407 12:47:40.561232 1187570 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0407 12:47:40.576206 1187570 kubeconfig.go:125] found "ha-547392" server: "https://192.168.39.254:8443"
	I0407 12:47:40.576253 1187570 api_server.go:166] Checking apiserver status ...
	I0407 12:47:40.576302 1187570 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0407 12:47:40.591052 1187570 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1462/cgroup
	W0407 12:47:40.607091 1187570 api_server.go:177] unable to find freezer cgroup: sudo egrep ^[0-9]+:freezer: /proc/1462/cgroup: Process exited with status 1
	stdout:
	
	stderr:
	I0407 12:47:40.607207 1187570 ssh_runner.go:195] Run: ls
	I0407 12:47:40.613114 1187570 api_server.go:253] Checking apiserver healthz at https://192.168.39.254:8443/healthz ...
	I0407 12:47:40.617977 1187570 api_server.go:279] https://192.168.39.254:8443/healthz returned 200:
	ok
	I0407 12:47:40.618010 1187570 status.go:463] ha-547392-m03 apiserver status = Running (err=<nil>)
	I0407 12:47:40.618019 1187570 status.go:176] ha-547392-m03 status: &{Name:ha-547392-m03 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0407 12:47:40.618036 1187570 status.go:174] checking status of ha-547392-m04 ...
	I0407 12:47:40.618402 1187570 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0407 12:47:40.618453 1187570 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0407 12:47:40.634633 1187570 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38437
	I0407 12:47:40.635184 1187570 main.go:141] libmachine: () Calling .GetVersion
	I0407 12:47:40.635676 1187570 main.go:141] libmachine: Using API Version  1
	I0407 12:47:40.635701 1187570 main.go:141] libmachine: () Calling .SetConfigRaw
	I0407 12:47:40.636141 1187570 main.go:141] libmachine: () Calling .GetMachineName
	I0407 12:47:40.636379 1187570 main.go:141] libmachine: (ha-547392-m04) Calling .GetState
	I0407 12:47:40.638203 1187570 status.go:371] ha-547392-m04 host status = "Running" (err=<nil>)
	I0407 12:47:40.638237 1187570 host.go:66] Checking if "ha-547392-m04" exists ...
	I0407 12:47:40.638519 1187570 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0407 12:47:40.638563 1187570 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0407 12:47:40.657130 1187570 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45053
	I0407 12:47:40.657626 1187570 main.go:141] libmachine: () Calling .GetVersion
	I0407 12:47:40.658106 1187570 main.go:141] libmachine: Using API Version  1
	I0407 12:47:40.658133 1187570 main.go:141] libmachine: () Calling .SetConfigRaw
	I0407 12:47:40.658553 1187570 main.go:141] libmachine: () Calling .GetMachineName
	I0407 12:47:40.658784 1187570 main.go:141] libmachine: (ha-547392-m04) Calling .GetIP
	I0407 12:47:40.662760 1187570 main.go:141] libmachine: (ha-547392-m04) DBG | domain ha-547392-m04 has defined MAC address 52:54:00:7c:69:49 in network mk-ha-547392
	I0407 12:47:40.663318 1187570 main.go:141] libmachine: (ha-547392-m04) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:7c:69:49", ip: ""} in network mk-ha-547392: {Iface:virbr1 ExpiryTime:2025-04-07 13:45:12 +0000 UTC Type:0 Mac:52:54:00:7c:69:49 Iaid: IPaddr:192.168.39.188 Prefix:24 Hostname:ha-547392-m04 Clientid:01:52:54:00:7c:69:49}
	I0407 12:47:40.663348 1187570 main.go:141] libmachine: (ha-547392-m04) DBG | domain ha-547392-m04 has defined IP address 192.168.39.188 and MAC address 52:54:00:7c:69:49 in network mk-ha-547392
	I0407 12:47:40.663623 1187570 host.go:66] Checking if "ha-547392-m04" exists ...
	I0407 12:47:40.664055 1187570 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0407 12:47:40.664113 1187570 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0407 12:47:40.679995 1187570 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45055
	I0407 12:47:40.680570 1187570 main.go:141] libmachine: () Calling .GetVersion
	I0407 12:47:40.681095 1187570 main.go:141] libmachine: Using API Version  1
	I0407 12:47:40.681122 1187570 main.go:141] libmachine: () Calling .SetConfigRaw
	I0407 12:47:40.681477 1187570 main.go:141] libmachine: () Calling .GetMachineName
	I0407 12:47:40.681744 1187570 main.go:141] libmachine: (ha-547392-m04) Calling .DriverName
	I0407 12:47:40.681955 1187570 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0407 12:47:40.681978 1187570 main.go:141] libmachine: (ha-547392-m04) Calling .GetSSHHostname
	I0407 12:47:40.685273 1187570 main.go:141] libmachine: (ha-547392-m04) DBG | domain ha-547392-m04 has defined MAC address 52:54:00:7c:69:49 in network mk-ha-547392
	I0407 12:47:40.685756 1187570 main.go:141] libmachine: (ha-547392-m04) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:7c:69:49", ip: ""} in network mk-ha-547392: {Iface:virbr1 ExpiryTime:2025-04-07 13:45:12 +0000 UTC Type:0 Mac:52:54:00:7c:69:49 Iaid: IPaddr:192.168.39.188 Prefix:24 Hostname:ha-547392-m04 Clientid:01:52:54:00:7c:69:49}
	I0407 12:47:40.685783 1187570 main.go:141] libmachine: (ha-547392-m04) DBG | domain ha-547392-m04 has defined IP address 192.168.39.188 and MAC address 52:54:00:7c:69:49 in network mk-ha-547392
	I0407 12:47:40.685985 1187570 main.go:141] libmachine: (ha-547392-m04) Calling .GetSSHPort
	I0407 12:47:40.686226 1187570 main.go:141] libmachine: (ha-547392-m04) Calling .GetSSHKeyPath
	I0407 12:47:40.686404 1187570 main.go:141] libmachine: (ha-547392-m04) Calling .GetSSHUsername
	I0407 12:47:40.686559 1187570 sshutil.go:53] new ssh client: &{IP:192.168.39.188 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20602-1162386/.minikube/machines/ha-547392-m04/id_rsa Username:docker}
	I0407 12:47:40.767203 1187570 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0407 12:47:40.783394 1187570 status.go:176] ha-547392-m04 status: &{Name:ha-547392-m04 Host:Running Kubelet:Running APIServer:Irrelevant Kubeconfig:Irrelevant Worker:true TimeToStop: DockerEnv: PodManEnv:}

                                                
                                                
** /stderr **
--- PASS: TestMultiControlPlane/serial/StopSecondaryNode (91.71s)
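The status trace above shows how each control-plane node is classified as Running: the host is reached over SSH, disk usage on /var is sampled with df -h /var | awk 'NR==2{print $5}' (the second df line's fifth column is the Use% figure), kube-apiserver is located with pgrep, the freezer-cgroup lookup is attempted (the "unable to find freezer cgroup" warning just records that egrep found no freezer: line in /proc/<pid>/cgroup, which is typical on cgroup v2 guests, so it exits 1), and finally https://192.168.39.254:8443/healthz is probed, with HTTP 200 and body "ok" recorded as "apiserver status = Running". The following is a minimal sketch of only that last healthz probe, not minikube's actual implementation; as an assumption it skips client certificates and TLS verification, which the real check does not.

	// Minimal healthz probe in the spirit of the "Checking apiserver healthz at
	// https://192.168.39.254:8443/healthz" step logged above. Illustrative only.
	package main

	import (
		"crypto/tls"
		"fmt"
		"io"
		"net/http"
		"strings"
		"time"
	)

	func apiserverHealthy(server string) (bool, error) {
		client := &http.Client{
			Timeout: 5 * time.Second,
			Transport: &http.Transport{
				// Sketch-only shortcut: the real check presents the cluster's client certs.
				TLSClientConfig: &tls.Config{InsecureSkipVerify: true},
			},
		}
		resp, err := client.Get(strings.TrimRight(server, "/") + "/healthz")
		if err != nil {
			return false, err
		}
		defer resp.Body.Close()
		body, _ := io.ReadAll(resp.Body)
		// The log treats HTTP 200 with body "ok" as a Running apiserver.
		return resp.StatusCode == http.StatusOK && strings.TrimSpace(string(body)) == "ok", nil
	}

	func main() {
		healthy, err := apiserverHealthy("https://192.168.39.254:8443")
		fmt.Println(healthy, err)
	}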

                                                
                                    
x
+
TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop (0.69s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop
ha_test.go:392: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop (0.69s)

                                                
                                    
x
+
TestMultiControlPlane/serial/RestartSecondaryNode (47.81s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/RestartSecondaryNode
ha_test.go:422: (dbg) Run:  out/minikube-linux-amd64 -p ha-547392 node start m02 -v=7 --alsologtostderr
ha_test.go:422: (dbg) Done: out/minikube-linux-amd64 -p ha-547392 node start m02 -v=7 --alsologtostderr: (46.82977816s)
ha_test.go:430: (dbg) Run:  out/minikube-linux-amd64 -p ha-547392 status -v=7 --alsologtostderr
ha_test.go:450: (dbg) Run:  kubectl get nodes
--- PASS: TestMultiControlPlane/serial/RestartSecondaryNode (47.81s)

                                                
                                    
x
+
TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart (0.92s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart
ha_test.go:281: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart (0.92s)

                                                
                                    
x
+
TestMultiControlPlane/serial/RestartClusterKeepsNodes (437.12s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/RestartClusterKeepsNodes
ha_test.go:458: (dbg) Run:  out/minikube-linux-amd64 node list -p ha-547392 -v=7 --alsologtostderr
ha_test.go:464: (dbg) Run:  out/minikube-linux-amd64 stop -p ha-547392 -v=7 --alsologtostderr
E0407 12:48:53.205013 1169716 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20602-1162386/.minikube/profiles/functional-728898/client.crt: no such file or directory" logger="UnhandledError"
E0407 12:51:09.344416 1169716 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20602-1162386/.minikube/profiles/functional-728898/client.crt: no such file or directory" logger="UnhandledError"
E0407 12:51:37.046826 1169716 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20602-1162386/.minikube/profiles/functional-728898/client.crt: no such file or directory" logger="UnhandledError"
E0407 12:52:04.915480 1169716 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20602-1162386/.minikube/profiles/addons-660533/client.crt: no such file or directory" logger="UnhandledError"
ha_test.go:464: (dbg) Done: out/minikube-linux-amd64 stop -p ha-547392 -v=7 --alsologtostderr: (4m34.332896639s)
ha_test.go:469: (dbg) Run:  out/minikube-linux-amd64 start -p ha-547392 --wait=true -v=7 --alsologtostderr
E0407 12:55:08.001041 1169716 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20602-1162386/.minikube/profiles/addons-660533/client.crt: no such file or directory" logger="UnhandledError"
ha_test.go:469: (dbg) Done: out/minikube-linux-amd64 start -p ha-547392 --wait=true -v=7 --alsologtostderr: (2m42.66596786s)
ha_test.go:474: (dbg) Run:  out/minikube-linux-amd64 node list -p ha-547392
--- PASS: TestMultiControlPlane/serial/RestartClusterKeepsNodes (437.12s)

                                                
                                    
x
+
TestMultiControlPlane/serial/DeleteSecondaryNode (18.5s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/DeleteSecondaryNode
ha_test.go:489: (dbg) Run:  out/minikube-linux-amd64 -p ha-547392 node delete m03 -v=7 --alsologtostderr
ha_test.go:489: (dbg) Done: out/minikube-linux-amd64 -p ha-547392 node delete m03 -v=7 --alsologtostderr: (17.698575317s)
ha_test.go:495: (dbg) Run:  out/minikube-linux-amd64 -p ha-547392 status -v=7 --alsologtostderr
ha_test.go:513: (dbg) Run:  kubectl get nodes
ha_test.go:521: (dbg) Run:  kubectl get nodes -o "go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'"
--- PASS: TestMultiControlPlane/serial/DeleteSecondaryNode (18.50s)
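The go-template passed to kubectl at ha_test.go:521 walks every node's conditions and prints the status of its Ready condition, one line per node; after the m03 delete the remaining nodes are presumably all expected to print True. The sketch below evaluates the same template string with Go's text/template over a small hand-written payload whose shape mirrors kubectl get nodes -o json, just to show what the command emits (the payload itself is an assumption, not captured output).

	// Evaluate the readiness template from ha_test.go:521 over a synthetic node list.
	package main

	import (
		"encoding/json"
		"os"
		"text/template"
	)

	const nodesJSON = `{"items":[{"status":{"conditions":[{"type":"Ready","status":"True"}]}},{"status":{"conditions":[{"type":"Ready","status":"True"}]}}]}`

	// Same template string the test passes to kubectl.
	const readyTmpl = `{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}`

	func main() {
		var nodes map[string]interface{}
		if err := json.Unmarshal([]byte(nodesJSON), &nodes); err != nil {
			panic(err)
		}
		// Prints " True" once per node; a not-ready node would surface as "False" or "Unknown".
		if err := template.Must(template.New("ready").Parse(readyTmpl)).Execute(os.Stdout, nodes); err != nil {
			panic(err)
		}
	}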

                                                
                                    
x
+
TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete (0.67s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete
ha_test.go:392: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete (0.67s)

                                                
                                    
x
+
TestMultiControlPlane/serial/StopCluster (273.08s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/StopCluster
ha_test.go:533: (dbg) Run:  out/minikube-linux-amd64 -p ha-547392 stop -v=7 --alsologtostderr
E0407 12:56:09.344226 1169716 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20602-1162386/.minikube/profiles/functional-728898/client.crt: no such file or directory" logger="UnhandledError"
E0407 12:57:04.915214 1169716 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20602-1162386/.minikube/profiles/addons-660533/client.crt: no such file or directory" logger="UnhandledError"
ha_test.go:533: (dbg) Done: out/minikube-linux-amd64 -p ha-547392 stop -v=7 --alsologtostderr: (4m32.954012168s)
ha_test.go:539: (dbg) Run:  out/minikube-linux-amd64 -p ha-547392 status -v=7 --alsologtostderr
ha_test.go:539: (dbg) Non-zero exit: out/minikube-linux-amd64 -p ha-547392 status -v=7 --alsologtostderr: exit status 7 (125.284057ms)

                                                
                                                
-- stdout --
	ha-547392
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	ha-547392-m02
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	ha-547392-m04
	type: Worker
	host: Stopped
	kubelet: Stopped
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0407 13:00:39.501781 1191769 out.go:345] Setting OutFile to fd 1 ...
	I0407 13:00:39.502194 1191769 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0407 13:00:39.502207 1191769 out.go:358] Setting ErrFile to fd 2...
	I0407 13:00:39.502212 1191769 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0407 13:00:39.502424 1191769 root.go:338] Updating PATH: /home/jenkins/minikube-integration/20602-1162386/.minikube/bin
	I0407 13:00:39.502623 1191769 out.go:352] Setting JSON to false
	I0407 13:00:39.502660 1191769 mustload.go:65] Loading cluster: ha-547392
	I0407 13:00:39.502754 1191769 notify.go:220] Checking for updates...
	I0407 13:00:39.503069 1191769 config.go:182] Loaded profile config "ha-547392": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.32.2
	I0407 13:00:39.503095 1191769 status.go:174] checking status of ha-547392 ...
	I0407 13:00:39.503544 1191769 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0407 13:00:39.503596 1191769 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0407 13:00:39.524786 1191769 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36083
	I0407 13:00:39.525553 1191769 main.go:141] libmachine: () Calling .GetVersion
	I0407 13:00:39.526391 1191769 main.go:141] libmachine: Using API Version  1
	I0407 13:00:39.526418 1191769 main.go:141] libmachine: () Calling .SetConfigRaw
	I0407 13:00:39.526957 1191769 main.go:141] libmachine: () Calling .GetMachineName
	I0407 13:00:39.527235 1191769 main.go:141] libmachine: (ha-547392) Calling .GetState
	I0407 13:00:39.529626 1191769 status.go:371] ha-547392 host status = "Stopped" (err=<nil>)
	I0407 13:00:39.529647 1191769 status.go:384] host is not running, skipping remaining checks
	I0407 13:00:39.529654 1191769 status.go:176] ha-547392 status: &{Name:ha-547392 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0407 13:00:39.529723 1191769 status.go:174] checking status of ha-547392-m02 ...
	I0407 13:00:39.530086 1191769 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0407 13:00:39.530168 1191769 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0407 13:00:39.546763 1191769 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37935
	I0407 13:00:39.547282 1191769 main.go:141] libmachine: () Calling .GetVersion
	I0407 13:00:39.547812 1191769 main.go:141] libmachine: Using API Version  1
	I0407 13:00:39.547862 1191769 main.go:141] libmachine: () Calling .SetConfigRaw
	I0407 13:00:39.548395 1191769 main.go:141] libmachine: () Calling .GetMachineName
	I0407 13:00:39.548712 1191769 main.go:141] libmachine: (ha-547392-m02) Calling .GetState
	I0407 13:00:39.550940 1191769 status.go:371] ha-547392-m02 host status = "Stopped" (err=<nil>)
	I0407 13:00:39.550964 1191769 status.go:384] host is not running, skipping remaining checks
	I0407 13:00:39.550974 1191769 status.go:176] ha-547392-m02 status: &{Name:ha-547392-m02 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0407 13:00:39.551010 1191769 status.go:174] checking status of ha-547392-m04 ...
	I0407 13:00:39.551379 1191769 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0407 13:00:39.551433 1191769 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0407 13:00:39.569228 1191769 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39535
	I0407 13:00:39.569817 1191769 main.go:141] libmachine: () Calling .GetVersion
	I0407 13:00:39.570421 1191769 main.go:141] libmachine: Using API Version  1
	I0407 13:00:39.570442 1191769 main.go:141] libmachine: () Calling .SetConfigRaw
	I0407 13:00:39.570870 1191769 main.go:141] libmachine: () Calling .GetMachineName
	I0407 13:00:39.571128 1191769 main.go:141] libmachine: (ha-547392-m04) Calling .GetState
	I0407 13:00:39.573017 1191769 status.go:371] ha-547392-m04 host status = "Stopped" (err=<nil>)
	I0407 13:00:39.573034 1191769 status.go:384] host is not running, skipping remaining checks
	I0407 13:00:39.573041 1191769 status.go:176] ha-547392-m04 status: &{Name:ha-547392-m04 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:true TimeToStop: DockerEnv: PodManEnv:}

                                                
                                                
** /stderr **
--- PASS: TestMultiControlPlane/serial/StopCluster (273.08s)
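The non-zero exit (status 7) from the status command above is not a failure in this context: after the stop, all three remaining nodes report Stopped, minikube signals that through its exit code, and the test still passes, so the harness evidently tolerates the non-zero status here and asserts on the printed output instead. A hedged way to script around the same behavior is to capture the exit code explicitly rather than aborting on any non-zero status, as in this sketch (binary path and profile name taken from the log; everything else is illustrative):

	// Run `minikube status` and report the exit code without treating non-zero as fatal,
	// since a fully stopped cluster intentionally exits non-zero.
	package main

	import (
		"errors"
		"fmt"
		"os/exec"
	)

	func main() {
		cmd := exec.Command("out/minikube-linux-amd64", "-p", "ha-547392", "status", "-v=7", "--alsologtostderr")
		out, err := cmd.CombinedOutput()
		fmt.Print(string(out))

		var exitErr *exec.ExitError
		if errors.As(err, &exitErr) {
			// e.g. 7 in the run above; 0 would mean everything is running.
			fmt.Println("status exit code:", exitErr.ExitCode())
		} else if err != nil {
			fmt.Println("failed to run minikube:", err)
		}
	}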

                                                
                                    
x
+
TestMultiControlPlane/serial/RestartCluster (126.26s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/RestartCluster
ha_test.go:562: (dbg) Run:  out/minikube-linux-amd64 start -p ha-547392 --wait=true -v=7 --alsologtostderr --driver=kvm2  --container-runtime=crio
E0407 13:01:09.343798 1169716 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20602-1162386/.minikube/profiles/functional-728898/client.crt: no such file or directory" logger="UnhandledError"
E0407 13:02:04.916070 1169716 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20602-1162386/.minikube/profiles/addons-660533/client.crt: no such file or directory" logger="UnhandledError"
E0407 13:02:32.408948 1169716 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20602-1162386/.minikube/profiles/functional-728898/client.crt: no such file or directory" logger="UnhandledError"
ha_test.go:562: (dbg) Done: out/minikube-linux-amd64 start -p ha-547392 --wait=true -v=7 --alsologtostderr --driver=kvm2  --container-runtime=crio: (2m5.477586128s)
ha_test.go:568: (dbg) Run:  out/minikube-linux-amd64 -p ha-547392 status -v=7 --alsologtostderr
ha_test.go:586: (dbg) Run:  kubectl get nodes
ha_test.go:594: (dbg) Run:  kubectl get nodes -o "go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'"
--- PASS: TestMultiControlPlane/serial/RestartCluster (126.26s)

                                                
                                    
x
+
TestMultiControlPlane/serial/DegradedAfterClusterRestart (0.67s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/DegradedAfterClusterRestart
ha_test.go:392: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestMultiControlPlane/serial/DegradedAfterClusterRestart (0.67s)

                                                
                                    
x
+
TestMultiControlPlane/serial/AddSecondaryNode (77.92s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/AddSecondaryNode
ha_test.go:607: (dbg) Run:  out/minikube-linux-amd64 node add -p ha-547392 --control-plane -v=7 --alsologtostderr
ha_test.go:607: (dbg) Done: out/minikube-linux-amd64 node add -p ha-547392 --control-plane -v=7 --alsologtostderr: (1m17.043952835s)
ha_test.go:613: (dbg) Run:  out/minikube-linux-amd64 -p ha-547392 status -v=7 --alsologtostderr
--- PASS: TestMultiControlPlane/serial/AddSecondaryNode (77.92s)

                                                
                                    
x
+
TestMultiControlPlane/serial/HAppyAfterSecondaryNodeAdd (0.91s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/HAppyAfterSecondaryNodeAdd
ha_test.go:281: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestMultiControlPlane/serial/HAppyAfterSecondaryNodeAdd (0.91s)

                                                
                                    
x
+
TestJSONOutput/start/Command (55.07s)

                                                
                                                
=== RUN   TestJSONOutput/start/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-amd64 start -p json-output-795567 --output=json --user=testUser --memory=2200 --wait=true --driver=kvm2  --container-runtime=crio
json_output_test.go:63: (dbg) Done: out/minikube-linux-amd64 start -p json-output-795567 --output=json --user=testUser --memory=2200 --wait=true --driver=kvm2  --container-runtime=crio: (55.070899628s)
--- PASS: TestJSONOutput/start/Command (55.07s)

                                                
                                    
x
+
TestJSONOutput/start/Audit (0s)

                                                
                                                
=== RUN   TestJSONOutput/start/Audit
--- PASS: TestJSONOutput/start/Audit (0.00s)

                                                
                                    
x
+
TestJSONOutput/start/parallel/DistinctCurrentSteps (0s)

                                                
                                                
=== RUN   TestJSONOutput/start/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/start/parallel/DistinctCurrentSteps

                                                
                                                

                                                
                                                
=== CONT  TestJSONOutput/start/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/start/parallel/DistinctCurrentSteps (0.00s)

                                                
                                    
x
+
TestJSONOutput/start/parallel/IncreasingCurrentSteps (0s)

                                                
                                                
=== RUN   TestJSONOutput/start/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/start/parallel/IncreasingCurrentSteps

                                                
                                                

                                                
                                                
=== CONT  TestJSONOutput/start/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/start/parallel/IncreasingCurrentSteps (0.00s)

                                                
                                    
x
+
TestJSONOutput/pause/Command (0.7s)

                                                
                                                
=== RUN   TestJSONOutput/pause/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-amd64 pause -p json-output-795567 --output=json --user=testUser
--- PASS: TestJSONOutput/pause/Command (0.70s)

                                                
                                    
x
+
TestJSONOutput/pause/Audit (0s)

                                                
                                                
=== RUN   TestJSONOutput/pause/Audit
--- PASS: TestJSONOutput/pause/Audit (0.00s)

                                                
                                    
x
+
TestJSONOutput/pause/parallel/DistinctCurrentSteps (0s)

                                                
                                                
=== RUN   TestJSONOutput/pause/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/pause/parallel/DistinctCurrentSteps

                                                
                                                

                                                
                                                
=== CONT  TestJSONOutput/pause/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/pause/parallel/DistinctCurrentSteps (0.00s)

                                                
                                    
x
+
TestJSONOutput/pause/parallel/IncreasingCurrentSteps (0s)

                                                
                                                
=== RUN   TestJSONOutput/pause/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/pause/parallel/IncreasingCurrentSteps

                                                
                                                

                                                
                                                
=== CONT  TestJSONOutput/pause/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/pause/parallel/IncreasingCurrentSteps (0.00s)

                                                
                                    
x
+
TestJSONOutput/unpause/Command (0.64s)

                                                
                                                
=== RUN   TestJSONOutput/unpause/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-amd64 unpause -p json-output-795567 --output=json --user=testUser
--- PASS: TestJSONOutput/unpause/Command (0.64s)

                                                
                                    
x
+
TestJSONOutput/unpause/Audit (0s)

                                                
                                                
=== RUN   TestJSONOutput/unpause/Audit
--- PASS: TestJSONOutput/unpause/Audit (0.00s)

                                                
                                    
x
+
TestJSONOutput/unpause/parallel/DistinctCurrentSteps (0s)

                                                
                                                
=== RUN   TestJSONOutput/unpause/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/unpause/parallel/DistinctCurrentSteps

                                                
                                                

                                                
                                                
=== CONT  TestJSONOutput/unpause/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/unpause/parallel/DistinctCurrentSteps (0.00s)

                                                
                                    
x
+
TestJSONOutput/unpause/parallel/IncreasingCurrentSteps (0s)

                                                
                                                
=== RUN   TestJSONOutput/unpause/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/unpause/parallel/IncreasingCurrentSteps

                                                
                                                

                                                
                                                
=== CONT  TestJSONOutput/unpause/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/unpause/parallel/IncreasingCurrentSteps (0.00s)

                                                
                                    
x
+
TestJSONOutput/stop/Command (7.36s)

                                                
                                                
=== RUN   TestJSONOutput/stop/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-amd64 stop -p json-output-795567 --output=json --user=testUser
json_output_test.go:63: (dbg) Done: out/minikube-linux-amd64 stop -p json-output-795567 --output=json --user=testUser: (7.362582838s)
--- PASS: TestJSONOutput/stop/Command (7.36s)

                                                
                                    
x
+
TestJSONOutput/stop/Audit (0s)

                                                
                                                
=== RUN   TestJSONOutput/stop/Audit
--- PASS: TestJSONOutput/stop/Audit (0.00s)

                                                
                                    
x
+
TestJSONOutput/stop/parallel/DistinctCurrentSteps (0s)

                                                
                                                
=== RUN   TestJSONOutput/stop/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/stop/parallel/DistinctCurrentSteps

                                                
                                                

                                                
                                                
=== CONT  TestJSONOutput/stop/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/stop/parallel/DistinctCurrentSteps (0.00s)

                                                
                                    
x
+
TestJSONOutput/stop/parallel/IncreasingCurrentSteps (0s)

                                                
                                                
=== RUN   TestJSONOutput/stop/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/stop/parallel/IncreasingCurrentSteps

                                                
                                                

                                                
                                                
=== CONT  TestJSONOutput/stop/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/stop/parallel/IncreasingCurrentSteps (0.00s)

                                                
                                    
x
+
TestErrorJSONOutput (0.22s)

                                                
                                                
=== RUN   TestErrorJSONOutput
json_output_test.go:160: (dbg) Run:  out/minikube-linux-amd64 start -p json-output-error-477225 --memory=2200 --output=json --wait=true --driver=fail
json_output_test.go:160: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p json-output-error-477225 --memory=2200 --output=json --wait=true --driver=fail: exit status 56 (74.240587ms)

                                                
                                                
-- stdout --
	{"specversion":"1.0","id":"dcdc84c5-b343-4e49-abae-8b154e378f4c","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"0","message":"[json-output-error-477225] minikube v1.35.0 on Ubuntu 20.04 (kvm/amd64)","name":"Initial Minikube Setup","totalsteps":"19"}}
	{"specversion":"1.0","id":"2642b956-cffa-439c-bb7c-8d34bc7ef147","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_LOCATION=20602"}}
	{"specversion":"1.0","id":"3a2345e7-b43d-40d1-a028-144e96d427b5","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true"}}
	{"specversion":"1.0","id":"84834c7c-54b6-465b-98c1-6aa97c52bbda","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"KUBECONFIG=/home/jenkins/minikube-integration/20602-1162386/kubeconfig"}}
	{"specversion":"1.0","id":"a20d7192-8d5b-4a6e-8d94-f95440ff46bb","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_HOME=/home/jenkins/minikube-integration/20602-1162386/.minikube"}}
	{"specversion":"1.0","id":"2ed8a745-e719-4c4b-a49c-223b7376ff6c","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_BIN=out/minikube-linux-amd64"}}
	{"specversion":"1.0","id":"25878e19-844a-4b35-b5ba-963b98675613","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_FORCE_SYSTEMD="}}
	{"specversion":"1.0","id":"5179f8be-d69f-4a42-a33b-f71bd70ca53e","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.error","datacontenttype":"application/json","data":{"advice":"","exitcode":"56","issues":"","message":"The driver 'fail' is not supported on linux/amd64","name":"DRV_UNSUPPORTED_OS","url":""}}

                                                
                                                
-- /stdout --
helpers_test.go:175: Cleaning up "json-output-error-477225" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p json-output-error-477225
--- PASS: TestErrorJSONOutput (0.22s)
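The --output=json stream captured above is a sequence of CloudEvents-style JSON objects, one per line, with minikube-specific event types such as io.k8s.sigs.minikube.step, io.k8s.sigs.minikube.info and io.k8s.sigs.minikube.error; the final error event carries the exit code (56) and the DRV_UNSUPPORTED_OS name for the bogus --driver=fail. A small decoder for that stream might look like the sketch below; the field names come from the lines above, and anything beyond them is an assumption.

	// Decode minikube's --output=json event stream (one JSON object per line)
	// and surface any io.k8s.sigs.minikube.error event.
	package main

	import (
		"bufio"
		"encoding/json"
		"fmt"
		"os"
		"strings"
	)

	type cloudEvent struct {
		Type string            `json:"type"`
		Data map[string]string `json:"data"`
	}

	func main() {
		// Pipe `minikube start --output=json ...` into stdin of this program.
		scanner := bufio.NewScanner(os.Stdin)
		for scanner.Scan() {
			line := strings.TrimSpace(scanner.Text())
			if line == "" {
				continue
			}
			var ev cloudEvent
			if err := json.Unmarshal([]byte(line), &ev); err != nil {
				continue // ignore non-JSON noise
			}
			if ev.Type == "io.k8s.sigs.minikube.error" {
				fmt.Printf("minikube error %s (exit code %s): %s\n",
					ev.Data["name"], ev.Data["exitcode"], ev.Data["message"])
			}
		}
	}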

                                                
                                    
x
+
TestMainNoArgs (0.05s)

                                                
                                                
=== RUN   TestMainNoArgs
main_test.go:68: (dbg) Run:  out/minikube-linux-amd64
--- PASS: TestMainNoArgs (0.05s)

                                                
                                    
x
+
TestMinikubeProfile (93.49s)

                                                
                                                
=== RUN   TestMinikubeProfile
minikube_profile_test.go:44: (dbg) Run:  out/minikube-linux-amd64 start -p first-496699 --driver=kvm2  --container-runtime=crio
minikube_profile_test.go:44: (dbg) Done: out/minikube-linux-amd64 start -p first-496699 --driver=kvm2  --container-runtime=crio: (43.078596047s)
minikube_profile_test.go:44: (dbg) Run:  out/minikube-linux-amd64 start -p second-508845 --driver=kvm2  --container-runtime=crio
E0407 13:06:09.346747 1169716 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20602-1162386/.minikube/profiles/functional-728898/client.crt: no such file or directory" logger="UnhandledError"
minikube_profile_test.go:44: (dbg) Done: out/minikube-linux-amd64 start -p second-508845 --driver=kvm2  --container-runtime=crio: (47.549429854s)
minikube_profile_test.go:51: (dbg) Run:  out/minikube-linux-amd64 profile first-496699
minikube_profile_test.go:55: (dbg) Run:  out/minikube-linux-amd64 profile list -ojson
minikube_profile_test.go:51: (dbg) Run:  out/minikube-linux-amd64 profile second-508845
minikube_profile_test.go:55: (dbg) Run:  out/minikube-linux-amd64 profile list -ojson
helpers_test.go:175: Cleaning up "second-508845" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p second-508845
helpers_test.go:175: Cleaning up "first-496699" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p first-496699
--- PASS: TestMinikubeProfile (93.49s)

                                                
                                    
x
+
TestMountStart/serial/StartWithMountFirst (28.16s)

                                                
                                                
=== RUN   TestMountStart/serial/StartWithMountFirst
mount_start_test.go:98: (dbg) Run:  out/minikube-linux-amd64 start -p mount-start-1-802159 --memory=2048 --mount --mount-gid 0 --mount-msize 6543 --mount-port 46464 --mount-uid 0 --no-kubernetes --driver=kvm2  --container-runtime=crio
E0407 13:07:04.918526 1169716 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20602-1162386/.minikube/profiles/addons-660533/client.crt: no such file or directory" logger="UnhandledError"
mount_start_test.go:98: (dbg) Done: out/minikube-linux-amd64 start -p mount-start-1-802159 --memory=2048 --mount --mount-gid 0 --mount-msize 6543 --mount-port 46464 --mount-uid 0 --no-kubernetes --driver=kvm2  --container-runtime=crio: (27.155326313s)
--- PASS: TestMountStart/serial/StartWithMountFirst (28.16s)

                                                
                                    
x
+
TestMountStart/serial/VerifyMountFirst (0.4s)

                                                
                                                
=== RUN   TestMountStart/serial/VerifyMountFirst
mount_start_test.go:114: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-1-802159 ssh -- ls /minikube-host
mount_start_test.go:127: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-1-802159 ssh -- mount | grep 9p
--- PASS: TestMountStart/serial/VerifyMountFirst (0.40s)
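The two ssh probes above verify the --mount flag end to end: ls /minikube-host confirms the mount point exists inside the guest, and mount | grep 9p confirms it is backed by the 9p filesystem minikube uses for host-to-guest mounts. The sketch below wraps the same two checks; the binary path and profile name come from the log, and the rest is illustrative rather than the test's code.

	// Re-run the two mount probes from mount_start_test.go against a profile.
	package main

	import (
		"fmt"
		"os/exec"
		"strings"
	)

	func sshOut(profile string, args ...string) (string, error) {
		cmdArgs := append([]string{"-p", profile, "ssh", "--"}, args...)
		out, err := exec.Command("out/minikube-linux-amd64", cmdArgs...).CombinedOutput()
		return string(out), err
	}

	func main() {
		const profile = "mount-start-1-802159"

		if out, err := sshOut(profile, "ls", "/minikube-host"); err != nil {
			fmt.Println("mount point missing:", err, out)
			return
		}

		out, err := sshOut(profile, "mount")
		if err != nil {
			fmt.Println("mount listing failed:", err)
			return
		}
		// Equivalent to the `mount | grep 9p` check in the test.
		fmt.Println("9p mount present:", strings.Contains(out, "9p"))
	}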

                                                
                                    
x
+
TestMountStart/serial/StartWithMountSecond (29.65s)

                                                
                                                
=== RUN   TestMountStart/serial/StartWithMountSecond
mount_start_test.go:98: (dbg) Run:  out/minikube-linux-amd64 start -p mount-start-2-822465 --memory=2048 --mount --mount-gid 0 --mount-msize 6543 --mount-port 46465 --mount-uid 0 --no-kubernetes --driver=kvm2  --container-runtime=crio
mount_start_test.go:98: (dbg) Done: out/minikube-linux-amd64 start -p mount-start-2-822465 --memory=2048 --mount --mount-gid 0 --mount-msize 6543 --mount-port 46465 --mount-uid 0 --no-kubernetes --driver=kvm2  --container-runtime=crio: (28.64866823s)
--- PASS: TestMountStart/serial/StartWithMountSecond (29.65s)

                                                
                                    
x
+
TestMountStart/serial/VerifyMountSecond (0.4s)

                                                
                                                
=== RUN   TestMountStart/serial/VerifyMountSecond
mount_start_test.go:114: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-2-822465 ssh -- ls /minikube-host
mount_start_test.go:127: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-2-822465 ssh -- mount | grep 9p
--- PASS: TestMountStart/serial/VerifyMountSecond (0.40s)

                                                
                                    
x
+
TestMountStart/serial/DeleteFirst (1.21s)

                                                
                                                
=== RUN   TestMountStart/serial/DeleteFirst
pause_test.go:132: (dbg) Run:  out/minikube-linux-amd64 delete -p mount-start-1-802159 --alsologtostderr -v=5
pause_test.go:132: (dbg) Done: out/minikube-linux-amd64 delete -p mount-start-1-802159 --alsologtostderr -v=5: (1.208221644s)
--- PASS: TestMountStart/serial/DeleteFirst (1.21s)

                                                
                                    
x
+
TestMountStart/serial/VerifyMountPostDelete (0.41s)

                                                
                                                
=== RUN   TestMountStart/serial/VerifyMountPostDelete
mount_start_test.go:114: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-2-822465 ssh -- ls /minikube-host
mount_start_test.go:127: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-2-822465 ssh -- mount | grep 9p
--- PASS: TestMountStart/serial/VerifyMountPostDelete (0.41s)

                                                
                                    
x
+
TestMountStart/serial/Stop (1.29s)

                                                
                                                
=== RUN   TestMountStart/serial/Stop
mount_start_test.go:155: (dbg) Run:  out/minikube-linux-amd64 stop -p mount-start-2-822465
mount_start_test.go:155: (dbg) Done: out/minikube-linux-amd64 stop -p mount-start-2-822465: (1.291226111s)
--- PASS: TestMountStart/serial/Stop (1.29s)

                                                
                                    
x
+
TestMountStart/serial/RestartStopped (22.45s)

                                                
                                                
=== RUN   TestMountStart/serial/RestartStopped
mount_start_test.go:166: (dbg) Run:  out/minikube-linux-amd64 start -p mount-start-2-822465
mount_start_test.go:166: (dbg) Done: out/minikube-linux-amd64 start -p mount-start-2-822465: (21.445579823s)
--- PASS: TestMountStart/serial/RestartStopped (22.45s)

                                                
                                    
x
+
TestMountStart/serial/VerifyMountPostStop (0.41s)

                                                
                                                
=== RUN   TestMountStart/serial/VerifyMountPostStop
mount_start_test.go:114: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-2-822465 ssh -- ls /minikube-host
mount_start_test.go:127: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-2-822465 ssh -- mount | grep 9p
--- PASS: TestMountStart/serial/VerifyMountPostStop (0.41s)

                                                
                                    
x
+
TestMultiNode/serial/FreshStart2Nodes (114.08s)

                                                
                                                
=== RUN   TestMultiNode/serial/FreshStart2Nodes
multinode_test.go:96: (dbg) Run:  out/minikube-linux-amd64 start -p multinode-522935 --wait=true --memory=2200 --nodes=2 -v=8 --alsologtostderr --driver=kvm2  --container-runtime=crio
multinode_test.go:96: (dbg) Done: out/minikube-linux-amd64 start -p multinode-522935 --wait=true --memory=2200 --nodes=2 -v=8 --alsologtostderr --driver=kvm2  --container-runtime=crio: (1m53.60982399s)
multinode_test.go:102: (dbg) Run:  out/minikube-linux-amd64 -p multinode-522935 status --alsologtostderr
--- PASS: TestMultiNode/serial/FreshStart2Nodes (114.08s)

                                                
                                    
x
+
TestMultiNode/serial/DeployApp2Nodes (6.48s)

                                                
                                                
=== RUN   TestMultiNode/serial/DeployApp2Nodes
multinode_test.go:493: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-522935 -- apply -f ./testdata/multinodes/multinode-pod-dns-test.yaml
multinode_test.go:498: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-522935 -- rollout status deployment/busybox
multinode_test.go:498: (dbg) Done: out/minikube-linux-amd64 kubectl -p multinode-522935 -- rollout status deployment/busybox: (4.890409511s)
multinode_test.go:505: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-522935 -- get pods -o jsonpath='{.items[*].status.podIP}'
multinode_test.go:528: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-522935 -- get pods -o jsonpath='{.items[*].metadata.name}'
multinode_test.go:536: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-522935 -- exec busybox-58667487b6-jbsh5 -- nslookup kubernetes.io
multinode_test.go:536: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-522935 -- exec busybox-58667487b6-nr4w2 -- nslookup kubernetes.io
multinode_test.go:546: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-522935 -- exec busybox-58667487b6-jbsh5 -- nslookup kubernetes.default
multinode_test.go:546: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-522935 -- exec busybox-58667487b6-nr4w2 -- nslookup kubernetes.default
multinode_test.go:554: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-522935 -- exec busybox-58667487b6-jbsh5 -- nslookup kubernetes.default.svc.cluster.local
multinode_test.go:554: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-522935 -- exec busybox-58667487b6-nr4w2 -- nslookup kubernetes.default.svc.cluster.local
--- PASS: TestMultiNode/serial/DeployApp2Nodes (6.48s)

                                                
                                    
x
+
TestMultiNode/serial/PingHostFrom2Pods (0.84s)

                                                
                                                
=== RUN   TestMultiNode/serial/PingHostFrom2Pods
multinode_test.go:564: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-522935 -- get pods -o jsonpath='{.items[*].metadata.name}'
multinode_test.go:572: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-522935 -- exec busybox-58667487b6-jbsh5 -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
multinode_test.go:583: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-522935 -- exec busybox-58667487b6-jbsh5 -- sh -c "ping -c 1 192.168.39.1"
multinode_test.go:572: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-522935 -- exec busybox-58667487b6-nr4w2 -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
multinode_test.go:583: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-522935 -- exec busybox-58667487b6-nr4w2 -- sh -c "ping -c 1 192.168.39.1"
--- PASS: TestMultiNode/serial/PingHostFrom2Pods (0.84s)
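The shell pipeline above (nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3) takes the fifth line of busybox's nslookup output and keeps its third space-delimited field, which is how the test pulls out the address resolved for host.minikube.internal before the follow-up ping -c 1 192.168.39.1 confirms each pod can reach the host on the 192.168.39.0/24 KVM network. The snippet below mirrors just that text extraction in Go; the sample input is synthetic, not captured nslookup output.

	// Mirror `awk 'NR==5' | cut -d' ' -f3`: take line 5 of some output and
	// return its third space-separated field.
	package main

	import (
		"fmt"
		"strings"
	)

	func fifthLineThirdField(out string) string {
		lines := strings.Split(out, "\n")
		if len(lines) < 5 {
			return ""
		}
		// cut -d' ' keeps empty fields, so split on single spaces rather than strings.Fields.
		fields := strings.Split(lines[4], " ")
		if len(fields) < 3 {
			return ""
		}
		return fields[2]
	}

	func main() {
		sample := "line1\nline2\nline3\nline4\nfield1 field2 192.168.39.1\n" // synthetic example
		fmt.Println(fifthLineThirdField(sample))                             // prints 192.168.39.1
	}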

                                                
                                    
x
+
TestMultiNode/serial/AddNode (52.69s)

                                                
                                                
=== RUN   TestMultiNode/serial/AddNode
multinode_test.go:121: (dbg) Run:  out/minikube-linux-amd64 node add -p multinode-522935 -v 3 --alsologtostderr
multinode_test.go:121: (dbg) Done: out/minikube-linux-amd64 node add -p multinode-522935 -v 3 --alsologtostderr: (52.056265289s)
multinode_test.go:127: (dbg) Run:  out/minikube-linux-amd64 -p multinode-522935 status --alsologtostderr
--- PASS: TestMultiNode/serial/AddNode (52.69s)

                                                
                                    
x
+
TestMultiNode/serial/MultiNodeLabels (0.07s)

                                                
                                                
=== RUN   TestMultiNode/serial/MultiNodeLabels
multinode_test.go:221: (dbg) Run:  kubectl --context multinode-522935 get nodes -o "jsonpath=[{range .items[*]}{.metadata.labels},{end}]"
--- PASS: TestMultiNode/serial/MultiNodeLabels (0.07s)

                                                
                                    
x
+
TestMultiNode/serial/ProfileList (0.66s)

                                                
                                                
=== RUN   TestMultiNode/serial/ProfileList
multinode_test.go:143: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestMultiNode/serial/ProfileList (0.66s)

                                                
                                    
x
+
TestMultiNode/serial/CopyFile (7.94s)

                                                
                                                
=== RUN   TestMultiNode/serial/CopyFile
multinode_test.go:184: (dbg) Run:  out/minikube-linux-amd64 -p multinode-522935 status --output json --alsologtostderr
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-522935 cp testdata/cp-test.txt multinode-522935:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-522935 ssh -n multinode-522935 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-522935 cp multinode-522935:/home/docker/cp-test.txt /tmp/TestMultiNodeserialCopyFile3886240375/001/cp-test_multinode-522935.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-522935 ssh -n multinode-522935 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-522935 cp multinode-522935:/home/docker/cp-test.txt multinode-522935-m02:/home/docker/cp-test_multinode-522935_multinode-522935-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-522935 ssh -n multinode-522935 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-522935 ssh -n multinode-522935-m02 "sudo cat /home/docker/cp-test_multinode-522935_multinode-522935-m02.txt"
E0407 13:11:09.344254 1169716 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20602-1162386/.minikube/profiles/functional-728898/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-522935 cp multinode-522935:/home/docker/cp-test.txt multinode-522935-m03:/home/docker/cp-test_multinode-522935_multinode-522935-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-522935 ssh -n multinode-522935 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-522935 ssh -n multinode-522935-m03 "sudo cat /home/docker/cp-test_multinode-522935_multinode-522935-m03.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-522935 cp testdata/cp-test.txt multinode-522935-m02:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-522935 ssh -n multinode-522935-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-522935 cp multinode-522935-m02:/home/docker/cp-test.txt /tmp/TestMultiNodeserialCopyFile3886240375/001/cp-test_multinode-522935-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-522935 ssh -n multinode-522935-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-522935 cp multinode-522935-m02:/home/docker/cp-test.txt multinode-522935:/home/docker/cp-test_multinode-522935-m02_multinode-522935.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-522935 ssh -n multinode-522935-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-522935 ssh -n multinode-522935 "sudo cat /home/docker/cp-test_multinode-522935-m02_multinode-522935.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-522935 cp multinode-522935-m02:/home/docker/cp-test.txt multinode-522935-m03:/home/docker/cp-test_multinode-522935-m02_multinode-522935-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-522935 ssh -n multinode-522935-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-522935 ssh -n multinode-522935-m03 "sudo cat /home/docker/cp-test_multinode-522935-m02_multinode-522935-m03.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-522935 cp testdata/cp-test.txt multinode-522935-m03:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-522935 ssh -n multinode-522935-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-522935 cp multinode-522935-m03:/home/docker/cp-test.txt /tmp/TestMultiNodeserialCopyFile3886240375/001/cp-test_multinode-522935-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-522935 ssh -n multinode-522935-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-522935 cp multinode-522935-m03:/home/docker/cp-test.txt multinode-522935:/home/docker/cp-test_multinode-522935-m03_multinode-522935.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-522935 ssh -n multinode-522935-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-522935 ssh -n multinode-522935 "sudo cat /home/docker/cp-test_multinode-522935-m03_multinode-522935.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-522935 cp multinode-522935-m03:/home/docker/cp-test.txt multinode-522935-m02:/home/docker/cp-test_multinode-522935-m03_multinode-522935-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-522935 ssh -n multinode-522935-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-522935 ssh -n multinode-522935-m02 "sudo cat /home/docker/cp-test_multinode-522935-m03_multinode-522935-m02.txt"
--- PASS: TestMultiNode/serial/CopyFile (7.94s)

                                                
                                    
TestMultiNode/serial/StopNode (2.44s)

                                                
                                                
=== RUN   TestMultiNode/serial/StopNode
multinode_test.go:248: (dbg) Run:  out/minikube-linux-amd64 -p multinode-522935 node stop m03
multinode_test.go:248: (dbg) Done: out/minikube-linux-amd64 -p multinode-522935 node stop m03: (1.460102631s)
multinode_test.go:254: (dbg) Run:  out/minikube-linux-amd64 -p multinode-522935 status
multinode_test.go:254: (dbg) Non-zero exit: out/minikube-linux-amd64 -p multinode-522935 status: exit status 7 (473.054531ms)

                                                
                                                
-- stdout --
	multinode-522935
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	multinode-522935-m02
	type: Worker
	host: Running
	kubelet: Running
	
	multinode-522935-m03
	type: Worker
	host: Stopped
	kubelet: Stopped
	

                                                
                                                
-- /stdout --
multinode_test.go:261: (dbg) Run:  out/minikube-linux-amd64 -p multinode-522935 status --alsologtostderr
multinode_test.go:261: (dbg) Non-zero exit: out/minikube-linux-amd64 -p multinode-522935 status --alsologtostderr: exit status 7 (508.909865ms)

                                                
                                                
-- stdout --
	multinode-522935
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	multinode-522935-m02
	type: Worker
	host: Running
	kubelet: Running
	
	multinode-522935-m03
	type: Worker
	host: Stopped
	kubelet: Stopped
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0407 13:11:17.000433 1199525 out.go:345] Setting OutFile to fd 1 ...
	I0407 13:11:17.000615 1199525 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0407 13:11:17.000627 1199525 out.go:358] Setting ErrFile to fd 2...
	I0407 13:11:17.000631 1199525 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0407 13:11:17.000839 1199525 root.go:338] Updating PATH: /home/jenkins/minikube-integration/20602-1162386/.minikube/bin
	I0407 13:11:17.001041 1199525 out.go:352] Setting JSON to false
	I0407 13:11:17.001080 1199525 mustload.go:65] Loading cluster: multinode-522935
	I0407 13:11:17.001152 1199525 notify.go:220] Checking for updates...
	I0407 13:11:17.001605 1199525 config.go:182] Loaded profile config "multinode-522935": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.32.2
	I0407 13:11:17.001633 1199525 status.go:174] checking status of multinode-522935 ...
	I0407 13:11:17.002150 1199525 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0407 13:11:17.002208 1199525 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0407 13:11:17.022660 1199525 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46845
	I0407 13:11:17.023256 1199525 main.go:141] libmachine: () Calling .GetVersion
	I0407 13:11:17.023809 1199525 main.go:141] libmachine: Using API Version  1
	I0407 13:11:17.023833 1199525 main.go:141] libmachine: () Calling .SetConfigRaw
	I0407 13:11:17.024290 1199525 main.go:141] libmachine: () Calling .GetMachineName
	I0407 13:11:17.024576 1199525 main.go:141] libmachine: (multinode-522935) Calling .GetState
	I0407 13:11:17.026795 1199525 status.go:371] multinode-522935 host status = "Running" (err=<nil>)
	I0407 13:11:17.026816 1199525 host.go:66] Checking if "multinode-522935" exists ...
	I0407 13:11:17.027193 1199525 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0407 13:11:17.027264 1199525 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0407 13:11:17.045170 1199525 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34663
	I0407 13:11:17.045834 1199525 main.go:141] libmachine: () Calling .GetVersion
	I0407 13:11:17.046452 1199525 main.go:141] libmachine: Using API Version  1
	I0407 13:11:17.046489 1199525 main.go:141] libmachine: () Calling .SetConfigRaw
	I0407 13:11:17.047021 1199525 main.go:141] libmachine: () Calling .GetMachineName
	I0407 13:11:17.047320 1199525 main.go:141] libmachine: (multinode-522935) Calling .GetIP
	I0407 13:11:17.052934 1199525 main.go:141] libmachine: (multinode-522935) DBG | domain multinode-522935 has defined MAC address 52:54:00:21:fd:3f in network mk-multinode-522935
	I0407 13:11:17.053966 1199525 main.go:141] libmachine: (multinode-522935) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:21:fd:3f", ip: ""} in network mk-multinode-522935: {Iface:virbr1 ExpiryTime:2025-04-07 14:08:26 +0000 UTC Type:0 Mac:52:54:00:21:fd:3f Iaid: IPaddr:192.168.39.215 Prefix:24 Hostname:multinode-522935 Clientid:01:52:54:00:21:fd:3f}
	I0407 13:11:17.053999 1199525 main.go:141] libmachine: (multinode-522935) DBG | domain multinode-522935 has defined IP address 192.168.39.215 and MAC address 52:54:00:21:fd:3f in network mk-multinode-522935
	I0407 13:11:17.054419 1199525 host.go:66] Checking if "multinode-522935" exists ...
	I0407 13:11:17.054976 1199525 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0407 13:11:17.055135 1199525 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0407 13:11:17.078044 1199525 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41493
	I0407 13:11:17.078738 1199525 main.go:141] libmachine: () Calling .GetVersion
	I0407 13:11:17.079352 1199525 main.go:141] libmachine: Using API Version  1
	I0407 13:11:17.079384 1199525 main.go:141] libmachine: () Calling .SetConfigRaw
	I0407 13:11:17.079921 1199525 main.go:141] libmachine: () Calling .GetMachineName
	I0407 13:11:17.080190 1199525 main.go:141] libmachine: (multinode-522935) Calling .DriverName
	I0407 13:11:17.080526 1199525 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0407 13:11:17.080573 1199525 main.go:141] libmachine: (multinode-522935) Calling .GetSSHHostname
	I0407 13:11:17.085317 1199525 main.go:141] libmachine: (multinode-522935) DBG | domain multinode-522935 has defined MAC address 52:54:00:21:fd:3f in network mk-multinode-522935
	I0407 13:11:17.086298 1199525 main.go:141] libmachine: (multinode-522935) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:21:fd:3f", ip: ""} in network mk-multinode-522935: {Iface:virbr1 ExpiryTime:2025-04-07 14:08:26 +0000 UTC Type:0 Mac:52:54:00:21:fd:3f Iaid: IPaddr:192.168.39.215 Prefix:24 Hostname:multinode-522935 Clientid:01:52:54:00:21:fd:3f}
	I0407 13:11:17.086335 1199525 main.go:141] libmachine: (multinode-522935) DBG | domain multinode-522935 has defined IP address 192.168.39.215 and MAC address 52:54:00:21:fd:3f in network mk-multinode-522935
	I0407 13:11:17.086931 1199525 main.go:141] libmachine: (multinode-522935) Calling .GetSSHPort
	I0407 13:11:17.087384 1199525 main.go:141] libmachine: (multinode-522935) Calling .GetSSHKeyPath
	I0407 13:11:17.087913 1199525 main.go:141] libmachine: (multinode-522935) Calling .GetSSHUsername
	I0407 13:11:17.088197 1199525 sshutil.go:53] new ssh client: &{IP:192.168.39.215 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20602-1162386/.minikube/machines/multinode-522935/id_rsa Username:docker}
	I0407 13:11:17.187868 1199525 ssh_runner.go:195] Run: systemctl --version
	I0407 13:11:17.195591 1199525 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0407 13:11:17.214066 1199525 kubeconfig.go:125] found "multinode-522935" server: "https://192.168.39.215:8443"
	I0407 13:11:17.214115 1199525 api_server.go:166] Checking apiserver status ...
	I0407 13:11:17.214156 1199525 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0407 13:11:17.231787 1199525 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1107/cgroup
	W0407 13:11:17.243633 1199525 api_server.go:177] unable to find freezer cgroup: sudo egrep ^[0-9]+:freezer: /proc/1107/cgroup: Process exited with status 1
	stdout:
	
	stderr:
	I0407 13:11:17.243703 1199525 ssh_runner.go:195] Run: ls
	I0407 13:11:17.248881 1199525 api_server.go:253] Checking apiserver healthz at https://192.168.39.215:8443/healthz ...
	I0407 13:11:17.254354 1199525 api_server.go:279] https://192.168.39.215:8443/healthz returned 200:
	ok
	I0407 13:11:17.254395 1199525 status.go:463] multinode-522935 apiserver status = Running (err=<nil>)
	I0407 13:11:17.254411 1199525 status.go:176] multinode-522935 status: &{Name:multinode-522935 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0407 13:11:17.254434 1199525 status.go:174] checking status of multinode-522935-m02 ...
	I0407 13:11:17.254801 1199525 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0407 13:11:17.254851 1199525 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0407 13:11:17.272445 1199525 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34759
	I0407 13:11:17.273037 1199525 main.go:141] libmachine: () Calling .GetVersion
	I0407 13:11:17.273699 1199525 main.go:141] libmachine: Using API Version  1
	I0407 13:11:17.273749 1199525 main.go:141] libmachine: () Calling .SetConfigRaw
	I0407 13:11:17.274243 1199525 main.go:141] libmachine: () Calling .GetMachineName
	I0407 13:11:17.274486 1199525 main.go:141] libmachine: (multinode-522935-m02) Calling .GetState
	I0407 13:11:17.276826 1199525 status.go:371] multinode-522935-m02 host status = "Running" (err=<nil>)
	I0407 13:11:17.276856 1199525 host.go:66] Checking if "multinode-522935-m02" exists ...
	I0407 13:11:17.277196 1199525 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0407 13:11:17.277282 1199525 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0407 13:11:17.295820 1199525 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33869
	I0407 13:11:17.296422 1199525 main.go:141] libmachine: () Calling .GetVersion
	I0407 13:11:17.296985 1199525 main.go:141] libmachine: Using API Version  1
	I0407 13:11:17.297015 1199525 main.go:141] libmachine: () Calling .SetConfigRaw
	I0407 13:11:17.297339 1199525 main.go:141] libmachine: () Calling .GetMachineName
	I0407 13:11:17.297579 1199525 main.go:141] libmachine: (multinode-522935-m02) Calling .GetIP
	I0407 13:11:17.300770 1199525 main.go:141] libmachine: (multinode-522935-m02) DBG | domain multinode-522935-m02 has defined MAC address 52:54:00:1a:27:d9 in network mk-multinode-522935
	I0407 13:11:17.301278 1199525 main.go:141] libmachine: (multinode-522935-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:1a:27:d9", ip: ""} in network mk-multinode-522935: {Iface:virbr1 ExpiryTime:2025-04-07 14:09:32 +0000 UTC Type:0 Mac:52:54:00:1a:27:d9 Iaid: IPaddr:192.168.39.32 Prefix:24 Hostname:multinode-522935-m02 Clientid:01:52:54:00:1a:27:d9}
	I0407 13:11:17.301312 1199525 main.go:141] libmachine: (multinode-522935-m02) DBG | domain multinode-522935-m02 has defined IP address 192.168.39.32 and MAC address 52:54:00:1a:27:d9 in network mk-multinode-522935
	I0407 13:11:17.301499 1199525 host.go:66] Checking if "multinode-522935-m02" exists ...
	I0407 13:11:17.301908 1199525 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0407 13:11:17.301959 1199525 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0407 13:11:17.320114 1199525 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41827
	I0407 13:11:17.320758 1199525 main.go:141] libmachine: () Calling .GetVersion
	I0407 13:11:17.321352 1199525 main.go:141] libmachine: Using API Version  1
	I0407 13:11:17.321379 1199525 main.go:141] libmachine: () Calling .SetConfigRaw
	I0407 13:11:17.321779 1199525 main.go:141] libmachine: () Calling .GetMachineName
	I0407 13:11:17.322025 1199525 main.go:141] libmachine: (multinode-522935-m02) Calling .DriverName
	I0407 13:11:17.322260 1199525 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0407 13:11:17.322297 1199525 main.go:141] libmachine: (multinode-522935-m02) Calling .GetSSHHostname
	I0407 13:11:17.326104 1199525 main.go:141] libmachine: (multinode-522935-m02) DBG | domain multinode-522935-m02 has defined MAC address 52:54:00:1a:27:d9 in network mk-multinode-522935
	I0407 13:11:17.326623 1199525 main.go:141] libmachine: (multinode-522935-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:1a:27:d9", ip: ""} in network mk-multinode-522935: {Iface:virbr1 ExpiryTime:2025-04-07 14:09:32 +0000 UTC Type:0 Mac:52:54:00:1a:27:d9 Iaid: IPaddr:192.168.39.32 Prefix:24 Hostname:multinode-522935-m02 Clientid:01:52:54:00:1a:27:d9}
	I0407 13:11:17.326662 1199525 main.go:141] libmachine: (multinode-522935-m02) DBG | domain multinode-522935-m02 has defined IP address 192.168.39.32 and MAC address 52:54:00:1a:27:d9 in network mk-multinode-522935
	I0407 13:11:17.326974 1199525 main.go:141] libmachine: (multinode-522935-m02) Calling .GetSSHPort
	I0407 13:11:17.327228 1199525 main.go:141] libmachine: (multinode-522935-m02) Calling .GetSSHKeyPath
	I0407 13:11:17.327428 1199525 main.go:141] libmachine: (multinode-522935-m02) Calling .GetSSHUsername
	I0407 13:11:17.327598 1199525 sshutil.go:53] new ssh client: &{IP:192.168.39.32 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20602-1162386/.minikube/machines/multinode-522935-m02/id_rsa Username:docker}
	I0407 13:11:17.413894 1199525 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0407 13:11:17.430388 1199525 status.go:176] multinode-522935-m02 status: &{Name:multinode-522935-m02 Host:Running Kubelet:Running APIServer:Irrelevant Kubeconfig:Irrelevant Worker:true TimeToStop: DockerEnv: PodManEnv:}
	I0407 13:11:17.430427 1199525 status.go:174] checking status of multinode-522935-m03 ...
	I0407 13:11:17.430798 1199525 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0407 13:11:17.430850 1199525 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0407 13:11:17.449174 1199525 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41293
	I0407 13:11:17.449853 1199525 main.go:141] libmachine: () Calling .GetVersion
	I0407 13:11:17.450370 1199525 main.go:141] libmachine: Using API Version  1
	I0407 13:11:17.450401 1199525 main.go:141] libmachine: () Calling .SetConfigRaw
	I0407 13:11:17.450842 1199525 main.go:141] libmachine: () Calling .GetMachineName
	I0407 13:11:17.451138 1199525 main.go:141] libmachine: (multinode-522935-m03) Calling .GetState
	I0407 13:11:17.453359 1199525 status.go:371] multinode-522935-m03 host status = "Stopped" (err=<nil>)
	I0407 13:11:17.453384 1199525 status.go:384] host is not running, skipping remaining checks
	I0407 13:11:17.453390 1199525 status.go:176] multinode-522935-m03 status: &{Name:multinode-522935-m03 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:true TimeToStop: DockerEnv: PodManEnv:}

                                                
                                                
** /stderr **
--- PASS: TestMultiNode/serial/StopNode (2.44s)

                                                
                                    
TestMultiNode/serial/StartAfterStop (40.73s)

                                                
                                                
=== RUN   TestMultiNode/serial/StartAfterStop
multinode_test.go:282: (dbg) Run:  out/minikube-linux-amd64 -p multinode-522935 node start m03 -v=7 --alsologtostderr
E0407 13:11:48.003615 1169716 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20602-1162386/.minikube/profiles/addons-660533/client.crt: no such file or directory" logger="UnhandledError"
multinode_test.go:282: (dbg) Done: out/minikube-linux-amd64 -p multinode-522935 node start m03 -v=7 --alsologtostderr: (39.97725697s)
multinode_test.go:290: (dbg) Run:  out/minikube-linux-amd64 -p multinode-522935 status -v=7 --alsologtostderr
multinode_test.go:306: (dbg) Run:  kubectl get nodes
--- PASS: TestMultiNode/serial/StartAfterStop (40.73s)

                                                
                                    
TestMultiNode/serial/RestartKeepsNodes (381.53s)

                                                
                                                
=== RUN   TestMultiNode/serial/RestartKeepsNodes
multinode_test.go:314: (dbg) Run:  out/minikube-linux-amd64 node list -p multinode-522935
multinode_test.go:321: (dbg) Run:  out/minikube-linux-amd64 stop -p multinode-522935
E0407 13:12:04.919564 1169716 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20602-1162386/.minikube/profiles/addons-660533/client.crt: no such file or directory" logger="UnhandledError"
multinode_test.go:321: (dbg) Done: out/minikube-linux-amd64 stop -p multinode-522935: (3m3.584790928s)
multinode_test.go:326: (dbg) Run:  out/minikube-linux-amd64 start -p multinode-522935 --wait=true -v=8 --alsologtostderr
E0407 13:16:09.344298 1169716 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20602-1162386/.minikube/profiles/functional-728898/client.crt: no such file or directory" logger="UnhandledError"
E0407 13:17:04.916042 1169716 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20602-1162386/.minikube/profiles/addons-660533/client.crt: no such file or directory" logger="UnhandledError"
multinode_test.go:326: (dbg) Done: out/minikube-linux-amd64 start -p multinode-522935 --wait=true -v=8 --alsologtostderr: (3m17.828094961s)
multinode_test.go:331: (dbg) Run:  out/minikube-linux-amd64 node list -p multinode-522935
--- PASS: TestMultiNode/serial/RestartKeepsNodes (381.53s)

                                                
                                    
TestMultiNode/serial/DeleteNode (2.9s)

                                                
                                                
=== RUN   TestMultiNode/serial/DeleteNode
multinode_test.go:416: (dbg) Run:  out/minikube-linux-amd64 -p multinode-522935 node delete m03
multinode_test.go:416: (dbg) Done: out/minikube-linux-amd64 -p multinode-522935 node delete m03: (2.268006147s)
multinode_test.go:422: (dbg) Run:  out/minikube-linux-amd64 -p multinode-522935 status --alsologtostderr
multinode_test.go:436: (dbg) Run:  kubectl get nodes
multinode_test.go:444: (dbg) Run:  kubectl get nodes -o "go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'"
--- PASS: TestMultiNode/serial/DeleteNode (2.90s)

                                                
                                    
TestMultiNode/serial/StopMultiNode (182.13s)

                                                
                                                
=== RUN   TestMultiNode/serial/StopMultiNode
multinode_test.go:345: (dbg) Run:  out/minikube-linux-amd64 -p multinode-522935 stop
E0407 13:19:12.411063 1169716 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20602-1162386/.minikube/profiles/functional-728898/client.crt: no such file or directory" logger="UnhandledError"
E0407 13:21:09.347169 1169716 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20602-1162386/.minikube/profiles/functional-728898/client.crt: no such file or directory" logger="UnhandledError"
multinode_test.go:345: (dbg) Done: out/minikube-linux-amd64 -p multinode-522935 stop: (3m1.918803275s)
multinode_test.go:351: (dbg) Run:  out/minikube-linux-amd64 -p multinode-522935 status
multinode_test.go:351: (dbg) Non-zero exit: out/minikube-linux-amd64 -p multinode-522935 status: exit status 7 (107.589473ms)

                                                
                                                
-- stdout --
	multinode-522935
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	multinode-522935-m02
	type: Worker
	host: Stopped
	kubelet: Stopped
	

                                                
                                                
-- /stdout --
multinode_test.go:358: (dbg) Run:  out/minikube-linux-amd64 -p multinode-522935 status --alsologtostderr
multinode_test.go:358: (dbg) Non-zero exit: out/minikube-linux-amd64 -p multinode-522935 status --alsologtostderr: exit status 7 (107.865559ms)

                                                
                                                
-- stdout --
	multinode-522935
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	multinode-522935-m02
	type: Worker
	host: Stopped
	kubelet: Stopped
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0407 13:21:24.694614 1203148 out.go:345] Setting OutFile to fd 1 ...
	I0407 13:21:24.695205 1203148 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0407 13:21:24.695223 1203148 out.go:358] Setting ErrFile to fd 2...
	I0407 13:21:24.695231 1203148 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0407 13:21:24.695739 1203148 root.go:338] Updating PATH: /home/jenkins/minikube-integration/20602-1162386/.minikube/bin
	I0407 13:21:24.696215 1203148 out.go:352] Setting JSON to false
	I0407 13:21:24.696365 1203148 mustload.go:65] Loading cluster: multinode-522935
	I0407 13:21:24.696581 1203148 notify.go:220] Checking for updates...
	I0407 13:21:24.697299 1203148 config.go:182] Loaded profile config "multinode-522935": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.32.2
	I0407 13:21:24.697342 1203148 status.go:174] checking status of multinode-522935 ...
	I0407 13:21:24.697811 1203148 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0407 13:21:24.697887 1203148 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0407 13:21:24.719150 1203148 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39353
	I0407 13:21:24.719774 1203148 main.go:141] libmachine: () Calling .GetVersion
	I0407 13:21:24.720554 1203148 main.go:141] libmachine: Using API Version  1
	I0407 13:21:24.720598 1203148 main.go:141] libmachine: () Calling .SetConfigRaw
	I0407 13:21:24.721239 1203148 main.go:141] libmachine: () Calling .GetMachineName
	I0407 13:21:24.721509 1203148 main.go:141] libmachine: (multinode-522935) Calling .GetState
	I0407 13:21:24.723732 1203148 status.go:371] multinode-522935 host status = "Stopped" (err=<nil>)
	I0407 13:21:24.723757 1203148 status.go:384] host is not running, skipping remaining checks
	I0407 13:21:24.723764 1203148 status.go:176] multinode-522935 status: &{Name:multinode-522935 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0407 13:21:24.723784 1203148 status.go:174] checking status of multinode-522935-m02 ...
	I0407 13:21:24.724130 1203148 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0407 13:21:24.724228 1203148 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0407 13:21:24.740857 1203148 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35371
	I0407 13:21:24.741624 1203148 main.go:141] libmachine: () Calling .GetVersion
	I0407 13:21:24.742639 1203148 main.go:141] libmachine: Using API Version  1
	I0407 13:21:24.742678 1203148 main.go:141] libmachine: () Calling .SetConfigRaw
	I0407 13:21:24.743345 1203148 main.go:141] libmachine: () Calling .GetMachineName
	I0407 13:21:24.744087 1203148 main.go:141] libmachine: (multinode-522935-m02) Calling .GetState
	I0407 13:21:24.746833 1203148 status.go:371] multinode-522935-m02 host status = "Stopped" (err=<nil>)
	I0407 13:21:24.746864 1203148 status.go:384] host is not running, skipping remaining checks
	I0407 13:21:24.746871 1203148 status.go:176] multinode-522935-m02 status: &{Name:multinode-522935-m02 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:true TimeToStop: DockerEnv: PodManEnv:}

                                                
                                                
** /stderr **
--- PASS: TestMultiNode/serial/StopMultiNode (182.13s)

                                                
                                    
TestMultiNode/serial/RestartMultiNode (115.98s)

                                                
                                                
=== RUN   TestMultiNode/serial/RestartMultiNode
multinode_test.go:376: (dbg) Run:  out/minikube-linux-amd64 start -p multinode-522935 --wait=true -v=8 --alsologtostderr --driver=kvm2  --container-runtime=crio
E0407 13:22:04.915499 1169716 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20602-1162386/.minikube/profiles/addons-660533/client.crt: no such file or directory" logger="UnhandledError"
multinode_test.go:376: (dbg) Done: out/minikube-linux-amd64 start -p multinode-522935 --wait=true -v=8 --alsologtostderr --driver=kvm2  --container-runtime=crio: (1m55.364904008s)
multinode_test.go:382: (dbg) Run:  out/minikube-linux-amd64 -p multinode-522935 status --alsologtostderr
multinode_test.go:396: (dbg) Run:  kubectl get nodes
multinode_test.go:404: (dbg) Run:  kubectl get nodes -o "go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'"
--- PASS: TestMultiNode/serial/RestartMultiNode (115.98s)

                                                
                                    
TestMultiNode/serial/ValidateNameConflict (48.08s)

                                                
                                                
=== RUN   TestMultiNode/serial/ValidateNameConflict
multinode_test.go:455: (dbg) Run:  out/minikube-linux-amd64 node list -p multinode-522935
multinode_test.go:464: (dbg) Run:  out/minikube-linux-amd64 start -p multinode-522935-m02 --driver=kvm2  --container-runtime=crio
multinode_test.go:464: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p multinode-522935-m02 --driver=kvm2  --container-runtime=crio: exit status 14 (84.621821ms)

                                                
                                                
-- stdout --
	* [multinode-522935-m02] minikube v1.35.0 on Ubuntu 20.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=20602
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/20602-1162386/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/20602-1162386/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	! Profile name 'multinode-522935-m02' is duplicated with machine name 'multinode-522935-m02' in profile 'multinode-522935'
	X Exiting due to MK_USAGE: Profile name should be unique

                                                
                                                
** /stderr **
multinode_test.go:472: (dbg) Run:  out/minikube-linux-amd64 start -p multinode-522935-m03 --driver=kvm2  --container-runtime=crio
multinode_test.go:472: (dbg) Done: out/minikube-linux-amd64 start -p multinode-522935-m03 --driver=kvm2  --container-runtime=crio: (46.604289485s)
multinode_test.go:479: (dbg) Run:  out/minikube-linux-amd64 node add -p multinode-522935
multinode_test.go:479: (dbg) Non-zero exit: out/minikube-linux-amd64 node add -p multinode-522935: exit status 80 (230.256614ms)

                                                
                                                
-- stdout --
	* Adding node m03 to cluster multinode-522935 as [worker]
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to GUEST_NODE_ADD: failed to add node: Node multinode-522935-m03 already exists in multinode-522935-m03 profile
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_node_040ea7097fd6ed71e65be9a474587f81f0ccd21d_0.log                    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
multinode_test.go:484: (dbg) Run:  out/minikube-linux-amd64 delete -p multinode-522935-m03
multinode_test.go:484: (dbg) Done: out/minikube-linux-amd64 delete -p multinode-522935-m03: (1.096244141s)
--- PASS: TestMultiNode/serial/ValidateNameConflict (48.08s)

                                                
                                    
TestScheduledStopUnix (117.23s)

                                                
                                                
=== RUN   TestScheduledStopUnix
scheduled_stop_test.go:128: (dbg) Run:  out/minikube-linux-amd64 start -p scheduled-stop-863923 --memory=2048 --driver=kvm2  --container-runtime=crio
scheduled_stop_test.go:128: (dbg) Done: out/minikube-linux-amd64 start -p scheduled-stop-863923 --memory=2048 --driver=kvm2  --container-runtime=crio: (45.363537s)
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-amd64 stop -p scheduled-stop-863923 --schedule 5m
scheduled_stop_test.go:191: (dbg) Run:  out/minikube-linux-amd64 status --format={{.TimeToStop}} -p scheduled-stop-863923 -n scheduled-stop-863923
scheduled_stop_test.go:169: signal error was:  <nil>
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-amd64 stop -p scheduled-stop-863923 --schedule 15s
scheduled_stop_test.go:169: signal error was:  os: process already finished
I0407 13:27:51.495996 1169716 retry.go:31] will retry after 126.074µs: open /home/jenkins/minikube-integration/20602-1162386/.minikube/profiles/scheduled-stop-863923/pid: no such file or directory
I0407 13:27:51.497180 1169716 retry.go:31] will retry after 214.723µs: open /home/jenkins/minikube-integration/20602-1162386/.minikube/profiles/scheduled-stop-863923/pid: no such file or directory
I0407 13:27:51.498389 1169716 retry.go:31] will retry after 325.976µs: open /home/jenkins/minikube-integration/20602-1162386/.minikube/profiles/scheduled-stop-863923/pid: no such file or directory
I0407 13:27:51.499596 1169716 retry.go:31] will retry after 244.747µs: open /home/jenkins/minikube-integration/20602-1162386/.minikube/profiles/scheduled-stop-863923/pid: no such file or directory
I0407 13:27:51.500811 1169716 retry.go:31] will retry after 644.604µs: open /home/jenkins/minikube-integration/20602-1162386/.minikube/profiles/scheduled-stop-863923/pid: no such file or directory
I0407 13:27:51.502000 1169716 retry.go:31] will retry after 1.10823ms: open /home/jenkins/minikube-integration/20602-1162386/.minikube/profiles/scheduled-stop-863923/pid: no such file or directory
I0407 13:27:51.504313 1169716 retry.go:31] will retry after 883.076µs: open /home/jenkins/minikube-integration/20602-1162386/.minikube/profiles/scheduled-stop-863923/pid: no such file or directory
I0407 13:27:51.505547 1169716 retry.go:31] will retry after 1.688019ms: open /home/jenkins/minikube-integration/20602-1162386/.minikube/profiles/scheduled-stop-863923/pid: no such file or directory
I0407 13:27:51.507917 1169716 retry.go:31] will retry after 2.637549ms: open /home/jenkins/minikube-integration/20602-1162386/.minikube/profiles/scheduled-stop-863923/pid: no such file or directory
I0407 13:27:51.511203 1169716 retry.go:31] will retry after 2.35017ms: open /home/jenkins/minikube-integration/20602-1162386/.minikube/profiles/scheduled-stop-863923/pid: no such file or directory
I0407 13:27:51.514554 1169716 retry.go:31] will retry after 7.238164ms: open /home/jenkins/minikube-integration/20602-1162386/.minikube/profiles/scheduled-stop-863923/pid: no such file or directory
I0407 13:27:51.522966 1169716 retry.go:31] will retry after 8.579685ms: open /home/jenkins/minikube-integration/20602-1162386/.minikube/profiles/scheduled-stop-863923/pid: no such file or directory
I0407 13:27:51.532349 1169716 retry.go:31] will retry after 12.642384ms: open /home/jenkins/minikube-integration/20602-1162386/.minikube/profiles/scheduled-stop-863923/pid: no such file or directory
I0407 13:27:51.545844 1169716 retry.go:31] will retry after 14.611124ms: open /home/jenkins/minikube-integration/20602-1162386/.minikube/profiles/scheduled-stop-863923/pid: no such file or directory
I0407 13:27:51.561304 1169716 retry.go:31] will retry after 43.761281ms: open /home/jenkins/minikube-integration/20602-1162386/.minikube/profiles/scheduled-stop-863923/pid: no such file or directory
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-amd64 stop -p scheduled-stop-863923 --cancel-scheduled
scheduled_stop_test.go:176: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p scheduled-stop-863923 -n scheduled-stop-863923
scheduled_stop_test.go:205: (dbg) Run:  out/minikube-linux-amd64 status -p scheduled-stop-863923
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-amd64 stop -p scheduled-stop-863923 --schedule 15s
E0407 13:28:28.006576 1169716 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20602-1162386/.minikube/profiles/addons-660533/client.crt: no such file or directory" logger="UnhandledError"
scheduled_stop_test.go:169: signal error was:  os: process already finished
scheduled_stop_test.go:205: (dbg) Run:  out/minikube-linux-amd64 status -p scheduled-stop-863923
scheduled_stop_test.go:205: (dbg) Non-zero exit: out/minikube-linux-amd64 status -p scheduled-stop-863923: exit status 7 (81.538428ms)

                                                
                                                
-- stdout --
	scheduled-stop-863923
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	

                                                
                                                
-- /stdout --
scheduled_stop_test.go:176: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p scheduled-stop-863923 -n scheduled-stop-863923
scheduled_stop_test.go:176: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p scheduled-stop-863923 -n scheduled-stop-863923: exit status 7 (82.027443ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
scheduled_stop_test.go:176: status error: exit status 7 (may be ok)
helpers_test.go:175: Cleaning up "scheduled-stop-863923" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p scheduled-stop-863923
--- PASS: TestScheduledStopUnix (117.23s)

                                                
                                    
TestRunningBinaryUpgrade (219.14s)

                                                
                                                
=== RUN   TestRunningBinaryUpgrade
=== PAUSE TestRunningBinaryUpgrade

                                                
                                                

                                                
                                                
=== CONT  TestRunningBinaryUpgrade
version_upgrade_test.go:120: (dbg) Run:  /tmp/minikube-v1.26.0.2438373587 start -p running-upgrade-046238 --memory=2200 --vm-driver=kvm2  --container-runtime=crio
version_upgrade_test.go:120: (dbg) Done: /tmp/minikube-v1.26.0.2438373587 start -p running-upgrade-046238 --memory=2200 --vm-driver=kvm2  --container-runtime=crio: (2m8.06638361s)
version_upgrade_test.go:130: (dbg) Run:  out/minikube-linux-amd64 start -p running-upgrade-046238 --memory=2200 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio
version_upgrade_test.go:130: (dbg) Done: out/minikube-linux-amd64 start -p running-upgrade-046238 --memory=2200 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio: (1m29.27602617s)
helpers_test.go:175: Cleaning up "running-upgrade-046238" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p running-upgrade-046238
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p running-upgrade-046238: (1.240695534s)
--- PASS: TestRunningBinaryUpgrade (219.14s)

                                                
                                    
TestNoKubernetes/serial/StartNoK8sWithVersion (0.1s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/StartNoK8sWithVersion
no_kubernetes_test.go:83: (dbg) Run:  out/minikube-linux-amd64 start -p NoKubernetes-027070 --no-kubernetes --kubernetes-version=1.20 --driver=kvm2  --container-runtime=crio
no_kubernetes_test.go:83: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p NoKubernetes-027070 --no-kubernetes --kubernetes-version=1.20 --driver=kvm2  --container-runtime=crio: exit status 14 (103.986339ms)

                                                
                                                
-- stdout --
	* [NoKubernetes-027070] minikube v1.35.0 on Ubuntu 20.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=20602
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/20602-1162386/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/20602-1162386/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to MK_USAGE: cannot specify --kubernetes-version with --no-kubernetes,
	to unset a global config run:
	
	$ minikube config unset kubernetes-version

                                                
                                                
** /stderr **
--- PASS: TestNoKubernetes/serial/StartNoK8sWithVersion (0.10s)

                                                
                                    
TestNoKubernetes/serial/StartWithK8s (99.7s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/StartWithK8s
no_kubernetes_test.go:95: (dbg) Run:  out/minikube-linux-amd64 start -p NoKubernetes-027070 --driver=kvm2  --container-runtime=crio
no_kubernetes_test.go:95: (dbg) Done: out/minikube-linux-amd64 start -p NoKubernetes-027070 --driver=kvm2  --container-runtime=crio: (1m39.376463622s)
no_kubernetes_test.go:200: (dbg) Run:  out/minikube-linux-amd64 -p NoKubernetes-027070 status -o json
--- PASS: TestNoKubernetes/serial/StartWithK8s (99.70s)

                                                
                                    
TestNetworkPlugins/group/false (3.85s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/false
net_test.go:246: (dbg) Run:  out/minikube-linux-amd64 start -p false-056871 --memory=2048 --alsologtostderr --cni=false --driver=kvm2  --container-runtime=crio
net_test.go:246: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p false-056871 --memory=2048 --alsologtostderr --cni=false --driver=kvm2  --container-runtime=crio: exit status 14 (141.94816ms)

                                                
                                                
-- stdout --
	* [false-056871] minikube v1.35.0 on Ubuntu 20.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=20602
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/20602-1162386/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/20602-1162386/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the kvm2 driver based on user configuration
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0407 13:30:35.409564 1208518 out.go:345] Setting OutFile to fd 1 ...
	I0407 13:30:35.409756 1208518 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0407 13:30:35.409772 1208518 out.go:358] Setting ErrFile to fd 2...
	I0407 13:30:35.409781 1208518 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0407 13:30:35.410134 1208518 root.go:338] Updating PATH: /home/jenkins/minikube-integration/20602-1162386/.minikube/bin
	I0407 13:30:35.410974 1208518 out.go:352] Setting JSON to false
	I0407 13:30:35.412586 1208518 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-2","uptime":18780,"bootTime":1744013856,"procs":204,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1078-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0407 13:30:35.412690 1208518 start.go:139] virtualization: kvm guest
	I0407 13:30:35.415512 1208518 out.go:177] * [false-056871] minikube v1.35.0 on Ubuntu 20.04 (kvm/amd64)
	I0407 13:30:35.417451 1208518 notify.go:220] Checking for updates...
	I0407 13:30:35.417489 1208518 out.go:177]   - MINIKUBE_LOCATION=20602
	I0407 13:30:35.419582 1208518 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0407 13:30:35.421187 1208518 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/20602-1162386/kubeconfig
	I0407 13:30:35.422904 1208518 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/20602-1162386/.minikube
	I0407 13:30:35.424833 1208518 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0407 13:30:35.426628 1208518 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0407 13:30:35.429079 1208518 config.go:182] Loaded profile config "NoKubernetes-027070": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.32.2
	I0407 13:30:35.429242 1208518 config.go:182] Loaded profile config "old-k8s-version-435730": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.20.0
	I0407 13:30:35.429368 1208518 config.go:182] Loaded profile config "running-upgrade-046238": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.24.1
	I0407 13:30:35.429508 1208518 driver.go:394] Setting default libvirt URI to qemu:///system
	I0407 13:30:35.476007 1208518 out.go:177] * Using the kvm2 driver based on user configuration
	I0407 13:30:35.478134 1208518 start.go:297] selected driver: kvm2
	I0407 13:30:35.478167 1208518 start.go:901] validating driver "kvm2" against <nil>
	I0407 13:30:35.478181 1208518 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0407 13:30:35.481532 1208518 out.go:201] 
	W0407 13:30:35.484760 1208518 out.go:270] X Exiting due to MK_USAGE: The "crio" container runtime requires CNI
	X Exiting due to MK_USAGE: The "crio" container runtime requires CNI
	I0407 13:30:35.486763 1208518 out.go:201] 

                                                
                                                
** /stderr **
net_test.go:88: 
----------------------- debugLogs start: false-056871 [pass: true] --------------------------------
>>> netcat: nslookup kubernetes.default:
Error in configuration: context was not found for specified context: false-056871

                                                
                                                

                                                
                                                
>>> netcat: nslookup debug kubernetes.default a-records:
Error in configuration: context was not found for specified context: false-056871

                                                
                                                

                                                
                                                
>>> netcat: dig search kubernetes.default:
Error in configuration: context was not found for specified context: false-056871

                                                
                                                

                                                
                                                
>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local udp/53:
Error in configuration: context was not found for specified context: false-056871

                                                
                                                

                                                
                                                
>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local tcp/53:
Error in configuration: context was not found for specified context: false-056871

                                                
                                                

                                                
                                                
>>> netcat: nc 10.96.0.10 udp/53:
Error in configuration: context was not found for specified context: false-056871

                                                
                                                

                                                
                                                
>>> netcat: nc 10.96.0.10 tcp/53:
Error in configuration: context was not found for specified context: false-056871

                                                
                                                

                                                
                                                
>>> netcat: /etc/nsswitch.conf:
Error in configuration: context was not found for specified context: false-056871

                                                
                                                

                                                
                                                
>>> netcat: /etc/hosts:
Error in configuration: context was not found for specified context: false-056871

                                                
                                                

                                                
                                                
>>> netcat: /etc/resolv.conf:
Error in configuration: context was not found for specified context: false-056871

                                                
                                                

                                                
                                                
>>> host: /etc/nsswitch.conf:
* Profile "false-056871" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-056871"

                                                
                                                

                                                
                                                
>>> host: /etc/hosts:
* Profile "false-056871" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-056871"

                                                
                                                

                                                
                                                
>>> host: /etc/resolv.conf:
* Profile "false-056871" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-056871"

                                                
                                                

                                                
                                                
>>> k8s: nodes, services, endpoints, daemon sets, deployments and pods, :
Error in configuration: context was not found for specified context: false-056871

                                                
                                                

                                                
                                                
>>> host: crictl pods:
* Profile "false-056871" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-056871"

                                                
                                                

                                                
                                                
>>> host: crictl containers:
* Profile "false-056871" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-056871"

                                                
                                                

                                                
                                                
>>> k8s: describe netcat deployment:
error: context "false-056871" does not exist

>>> k8s: describe netcat pod(s):
error: context "false-056871" does not exist

>>> k8s: netcat logs:
error: context "false-056871" does not exist

>>> k8s: describe coredns deployment:
error: context "false-056871" does not exist

>>> k8s: describe coredns pods:
error: context "false-056871" does not exist

>>> k8s: coredns logs:
error: context "false-056871" does not exist

>>> k8s: describe api server pod(s):
error: context "false-056871" does not exist

>>> k8s: api server logs:
error: context "false-056871" does not exist

>>> host: /etc/cni:
* Profile "false-056871" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-056871"

>>> host: ip a s:
* Profile "false-056871" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-056871"

>>> host: ip r s:
* Profile "false-056871" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-056871"

>>> host: iptables-save:
* Profile "false-056871" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-056871"

>>> host: iptables table nat:
* Profile "false-056871" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-056871"

>>> k8s: describe kube-proxy daemon set:
error: context "false-056871" does not exist

>>> k8s: describe kube-proxy pod(s):
error: context "false-056871" does not exist

>>> k8s: kube-proxy logs:
error: context "false-056871" does not exist

>>> host: kubelet daemon status:
* Profile "false-056871" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-056871"

>>> host: kubelet daemon config:
* Profile "false-056871" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-056871"

>>> k8s: kubelet logs:
* Profile "false-056871" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-056871"

>>> host: /etc/kubernetes/kubelet.conf:
* Profile "false-056871" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-056871"

>>> host: /var/lib/kubelet/config.yaml:
* Profile "false-056871" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-056871"

>>> k8s: kubectl config:
apiVersion: v1
clusters: null
contexts: null
current-context: ""
kind: Config
preferences: {}
users: null

>>> k8s: cms:
Error in configuration: context was not found for specified context: false-056871

>>> host: docker daemon status:
* Profile "false-056871" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-056871"

>>> host: docker daemon config:
* Profile "false-056871" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-056871"

>>> host: /etc/docker/daemon.json:
* Profile "false-056871" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-056871"

>>> host: docker system info:
* Profile "false-056871" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-056871"

>>> host: cri-docker daemon status:
* Profile "false-056871" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-056871"

>>> host: cri-docker daemon config:
* Profile "false-056871" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-056871"

>>> host: /etc/systemd/system/cri-docker.service.d/10-cni.conf:
* Profile "false-056871" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-056871"

>>> host: /usr/lib/systemd/system/cri-docker.service:
* Profile "false-056871" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-056871"

>>> host: cri-dockerd version:
* Profile "false-056871" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-056871"

>>> host: containerd daemon status:
* Profile "false-056871" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-056871"

>>> host: containerd daemon config:
* Profile "false-056871" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-056871"

>>> host: /lib/systemd/system/containerd.service:
* Profile "false-056871" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-056871"

>>> host: /etc/containerd/config.toml:
* Profile "false-056871" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-056871"

>>> host: containerd config dump:
* Profile "false-056871" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-056871"

>>> host: crio daemon status:
* Profile "false-056871" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-056871"

>>> host: crio daemon config:
* Profile "false-056871" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-056871"

>>> host: /etc/crio:
* Profile "false-056871" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-056871"

>>> host: crio config:
* Profile "false-056871" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-056871"

----------------------- debugLogs end: false-056871 [took: 3.54110691s] --------------------------------
helpers_test.go:175: Cleaning up "false-056871" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p false-056871
--- PASS: TestNetworkPlugins/group/false (3.85s)

                                                
                                    
TestNoKubernetes/serial/StartWithStopK8s (7.55s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/StartWithStopK8s
no_kubernetes_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p NoKubernetes-027070 --no-kubernetes --driver=kvm2  --container-runtime=crio
no_kubernetes_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p NoKubernetes-027070 --no-kubernetes --driver=kvm2  --container-runtime=crio: (4.952009791s)
no_kubernetes_test.go:200: (dbg) Run:  out/minikube-linux-amd64 -p NoKubernetes-027070 status -o json
no_kubernetes_test.go:200: (dbg) Non-zero exit: out/minikube-linux-amd64 -p NoKubernetes-027070 status -o json: exit status 2 (390.852266ms)

                                                
                                                
-- stdout --
	{"Name":"NoKubernetes-027070","Host":"Running","Kubelet":"Stopped","APIServer":"Stopped","Kubeconfig":"Configured","Worker":false}

                                                
                                                
-- /stdout --
no_kubernetes_test.go:124: (dbg) Run:  out/minikube-linux-amd64 delete -p NoKubernetes-027070
no_kubernetes_test.go:124: (dbg) Done: out/minikube-linux-amd64 delete -p NoKubernetes-027070: (2.202822337s)
--- PASS: TestNoKubernetes/serial/StartWithStopK8s (7.55s)
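
Reproducing this scenario by hand amounts to starting a profile without Kubernetes, confirming via the JSON status that the host is running while the kubelet and API server are stopped, and deleting the profile again (profile name and binary path are taken from this run; a regular minikube binary works the same way):

  # start a VM-only profile, with no Kubernetes components
  out/minikube-linux-amd64 start -p NoKubernetes-027070 --no-kubernetes --driver=kvm2 --container-runtime=crio
  # status exits non-zero here because Kubernetes is stopped; the JSON still reports Host "Running"
  out/minikube-linux-amd64 -p NoKubernetes-027070 status -o json
  # clean up
  out/minikube-linux-amd64 delete -p NoKubernetes-027070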

                                                
                                    
TestNoKubernetes/serial/Start (52.34s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/Start
no_kubernetes_test.go:136: (dbg) Run:  out/minikube-linux-amd64 start -p NoKubernetes-027070 --no-kubernetes --driver=kvm2  --container-runtime=crio
E0407 13:31:09.344250 1169716 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20602-1162386/.minikube/profiles/functional-728898/client.crt: no such file or directory" logger="UnhandledError"
no_kubernetes_test.go:136: (dbg) Done: out/minikube-linux-amd64 start -p NoKubernetes-027070 --no-kubernetes --driver=kvm2  --container-runtime=crio: (52.341310412s)
--- PASS: TestNoKubernetes/serial/Start (52.34s)

                                                
                                    
TestNoKubernetes/serial/VerifyK8sNotRunning (0.31s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/VerifyK8sNotRunning
no_kubernetes_test.go:147: (dbg) Run:  out/minikube-linux-amd64 ssh -p NoKubernetes-027070 "sudo systemctl is-active --quiet service kubelet"
no_kubernetes_test.go:147: (dbg) Non-zero exit: out/minikube-linux-amd64 ssh -p NoKubernetes-027070 "sudo systemctl is-active --quiet service kubelet": exit status 1 (308.814151ms)

                                                
                                                
** stderr ** 
	ssh: Process exited with status 3

                                                
                                                
** /stderr **
--- PASS: TestNoKubernetes/serial/VerifyK8sNotRunning (0.31s)
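
The kubelet check above is a plain systemd probe over SSH; the non-zero exit (systemctl is-active typically exits 3 for an inactive unit, surfaced here as "Process exited with status 3") is the expected outcome when no Kubernetes is installed. A minimal sketch, using the same profile:

  # should fail: the kubelet service must not be active on a --no-kubernetes profile
  out/minikube-linux-amd64 ssh -p NoKubernetes-027070 "sudo systemctl is-active --quiet service kubelet"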

                                                
                                    
TestNoKubernetes/serial/ProfileList (2.06s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/ProfileList
no_kubernetes_test.go:169: (dbg) Run:  out/minikube-linux-amd64 profile list
no_kubernetes_test.go:169: (dbg) Done: out/minikube-linux-amd64 profile list: (1.170338438s)
no_kubernetes_test.go:179: (dbg) Run:  out/minikube-linux-amd64 profile list --output=json
--- PASS: TestNoKubernetes/serial/ProfileList (2.06s)

                                                
                                    
TestNoKubernetes/serial/Stop (1.33s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/Stop
no_kubernetes_test.go:158: (dbg) Run:  out/minikube-linux-amd64 stop -p NoKubernetes-027070
no_kubernetes_test.go:158: (dbg) Done: out/minikube-linux-amd64 stop -p NoKubernetes-027070: (1.32842614s)
--- PASS: TestNoKubernetes/serial/Stop (1.33s)

                                                
                                    
TestNoKubernetes/serial/StartNoArgs (46.75s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/StartNoArgs
no_kubernetes_test.go:191: (dbg) Run:  out/minikube-linux-amd64 start -p NoKubernetes-027070 --driver=kvm2  --container-runtime=crio
E0407 13:32:04.915037 1169716 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20602-1162386/.minikube/profiles/addons-660533/client.crt: no such file or directory" logger="UnhandledError"
no_kubernetes_test.go:191: (dbg) Done: out/minikube-linux-amd64 start -p NoKubernetes-027070 --driver=kvm2  --container-runtime=crio: (46.746077675s)
--- PASS: TestNoKubernetes/serial/StartNoArgs (46.75s)

                                                
                                    
TestNoKubernetes/serial/VerifyK8sNotRunningSecond (0.25s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/VerifyK8sNotRunningSecond
no_kubernetes_test.go:147: (dbg) Run:  out/minikube-linux-amd64 ssh -p NoKubernetes-027070 "sudo systemctl is-active --quiet service kubelet"
no_kubernetes_test.go:147: (dbg) Non-zero exit: out/minikube-linux-amd64 ssh -p NoKubernetes-027070 "sudo systemctl is-active --quiet service kubelet": exit status 1 (245.704211ms)

                                                
                                                
** stderr ** 
	ssh: Process exited with status 3

                                                
                                                
** /stderr **
--- PASS: TestNoKubernetes/serial/VerifyK8sNotRunningSecond (0.25s)

                                                
                                    
TestStartStop/group/no-preload/serial/FirstStart (118.37s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/FirstStart
start_stop_delete_test.go:184: (dbg) Run:  out/minikube-linux-amd64 start -p no-preload-028452 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.32.2
start_stop_delete_test.go:184: (dbg) Done: out/minikube-linux-amd64 start -p no-preload-028452 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.32.2: (1m58.365139122s)
--- PASS: TestStartStop/group/no-preload/serial/FirstStart (118.37s)

                                                
                                    
TestStartStop/group/embed-certs/serial/FirstStart (69.12s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/FirstStart
start_stop_delete_test.go:184: (dbg) Run:  out/minikube-linux-amd64 start -p embed-certs-931633 --memory=2200 --alsologtostderr --wait=true --embed-certs --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.32.2
start_stop_delete_test.go:184: (dbg) Done: out/minikube-linux-amd64 start -p embed-certs-931633 --memory=2200 --alsologtostderr --wait=true --embed-certs --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.32.2: (1m9.123716951s)
--- PASS: TestStartStop/group/embed-certs/serial/FirstStart (69.12s)

                                                
                                    
TestStartStop/group/no-preload/serial/DeployApp (12.3s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/DeployApp
start_stop_delete_test.go:194: (dbg) Run:  kubectl --context no-preload-028452 create -f testdata/busybox.yaml
start_stop_delete_test.go:194: (dbg) TestStartStop/group/no-preload/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:344: "busybox" [7174a577-6080-46bd-92ba-e24d12610343] Pending
helpers_test.go:344: "busybox" [7174a577-6080-46bd-92ba-e24d12610343] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:344: "busybox" [7174a577-6080-46bd-92ba-e24d12610343] Running
start_stop_delete_test.go:194: (dbg) TestStartStop/group/no-preload/serial/DeployApp: integration-test=busybox healthy within 12.004746398s
start_stop_delete_test.go:194: (dbg) Run:  kubectl --context no-preload-028452 exec busybox -- /bin/sh -c "ulimit -n"
--- PASS: TestStartStop/group/no-preload/serial/DeployApp (12.30s)
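
For a manual run of the same deploy check, the busybox manifest from the repository's testdata is applied against the profile's context and the pod's open-file limit is read back once it is Ready. The kubectl wait call below is an assumed stand-in for the test helper's 8m0s readiness poll, not part of the test itself:

  kubectl --context no-preload-028452 create -f testdata/busybox.yaml
  # rough equivalent of the helper's readiness wait
  kubectl --context no-preload-028452 wait --for=condition=ready pod -l integration-test=busybox --timeout=8m0s
  # the test asserts the container's file-descriptor limit via ulimit
  kubectl --context no-preload-028452 exec busybox -- /bin/sh -c "ulimit -n"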

                                                
                                    
TestStartStop/group/embed-certs/serial/DeployApp (12.34s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/DeployApp
start_stop_delete_test.go:194: (dbg) Run:  kubectl --context embed-certs-931633 create -f testdata/busybox.yaml
start_stop_delete_test.go:194: (dbg) TestStartStop/group/embed-certs/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:344: "busybox" [397032cc-40bd-49dd-9175-392678067c74] Pending
helpers_test.go:344: "busybox" [397032cc-40bd-49dd-9175-392678067c74] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:344: "busybox" [397032cc-40bd-49dd-9175-392678067c74] Running
start_stop_delete_test.go:194: (dbg) TestStartStop/group/embed-certs/serial/DeployApp: integration-test=busybox healthy within 12.007655806s
start_stop_delete_test.go:194: (dbg) Run:  kubectl --context embed-certs-931633 exec busybox -- /bin/sh -c "ulimit -n"
--- PASS: TestStartStop/group/embed-certs/serial/DeployApp (12.34s)

                                                
                                    
TestStartStop/group/no-preload/serial/EnableAddonWhileActive (1.1s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/EnableAddonWhileActive
start_stop_delete_test.go:203: (dbg) Run:  out/minikube-linux-amd64 addons enable metrics-server -p no-preload-028452 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:203: (dbg) Done: out/minikube-linux-amd64 addons enable metrics-server -p no-preload-028452 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain: (1.015156374s)
start_stop_delete_test.go:213: (dbg) Run:  kubectl --context no-preload-028452 describe deploy/metrics-server -n kube-system
--- PASS: TestStartStop/group/no-preload/serial/EnableAddonWhileActive (1.10s)
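
The addon command above shows how test images are redirected: --images and --registries override the metrics-server addon's image name and registry, and the kubectl describe afterwards confirms what the deployment actually references. Condensed:

  # point the metrics-server addon at a stand-in image on a fake registry
  out/minikube-linux-amd64 addons enable metrics-server -p no-preload-028452 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
  # inspect the resulting image reference in the deployment
  kubectl --context no-preload-028452 describe deploy/metrics-server -n kube-system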

                                                
                                    
TestStartStop/group/no-preload/serial/Stop (91.1s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/Stop
start_stop_delete_test.go:226: (dbg) Run:  out/minikube-linux-amd64 stop -p no-preload-028452 --alsologtostderr -v=3
start_stop_delete_test.go:226: (dbg) Done: out/minikube-linux-amd64 stop -p no-preload-028452 --alsologtostderr -v=3: (1m31.096246879s)
--- PASS: TestStartStop/group/no-preload/serial/Stop (91.10s)

                                                
                                    
TestStartStop/group/old-k8s-version/serial/Stop (6.31s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/Stop
start_stop_delete_test.go:226: (dbg) Run:  out/minikube-linux-amd64 stop -p old-k8s-version-435730 --alsologtostderr -v=3
start_stop_delete_test.go:226: (dbg) Done: out/minikube-linux-amd64 stop -p old-k8s-version-435730 --alsologtostderr -v=3: (6.310039883s)
--- PASS: TestStartStop/group/old-k8s-version/serial/Stop (6.31s)

                                                
                                    
TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop (0.23s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop
start_stop_delete_test.go:237: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-435730 -n old-k8s-version-435730
start_stop_delete_test.go:237: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-435730 -n old-k8s-version-435730: exit status 7 (80.622764ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
start_stop_delete_test.go:237: status error: exit status 7 (may be ok)
start_stop_delete_test.go:244: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p old-k8s-version-435730 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop (0.23s)
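
The sequence above verifies that addons can be enabled against a stopped profile: the host status template reports Stopped (exit status 7 is tolerated, as the log notes), and the dashboard addon is then enabled offline. By hand, with this run's profile:

  # exit status 7 with "Stopped" output is expected for a stopped profile
  out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-435730 -n old-k8s-version-435730
  out/minikube-linux-amd64 addons enable dashboard -p old-k8s-version-435730 --images=MetricsScraper=registry.k8s.io/echoserver:1.4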

                                                
                                    
TestStartStop/group/embed-certs/serial/EnableAddonWhileActive (1.24s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/EnableAddonWhileActive
start_stop_delete_test.go:203: (dbg) Run:  out/minikube-linux-amd64 addons enable metrics-server -p embed-certs-931633 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:203: (dbg) Done: out/minikube-linux-amd64 addons enable metrics-server -p embed-certs-931633 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain: (1.132477825s)
start_stop_delete_test.go:213: (dbg) Run:  kubectl --context embed-certs-931633 describe deploy/metrics-server -n kube-system
--- PASS: TestStartStop/group/embed-certs/serial/EnableAddonWhileActive (1.24s)

                                                
                                    
TestStartStop/group/embed-certs/serial/Stop (91.61s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/Stop
start_stop_delete_test.go:226: (dbg) Run:  out/minikube-linux-amd64 stop -p embed-certs-931633 --alsologtostderr -v=3
E0407 13:35:52.413531 1169716 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20602-1162386/.minikube/profiles/functional-728898/client.crt: no such file or directory" logger="UnhandledError"
E0407 13:36:09.343586 1169716 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20602-1162386/.minikube/profiles/functional-728898/client.crt: no such file or directory" logger="UnhandledError"
start_stop_delete_test.go:226: (dbg) Done: out/minikube-linux-amd64 stop -p embed-certs-931633 --alsologtostderr -v=3: (1m31.612759682s)
--- PASS: TestStartStop/group/embed-certs/serial/Stop (91.61s)

                                                
                                    
TestStartStop/group/no-preload/serial/EnableAddonAfterStop (0.24s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/EnableAddonAfterStop
start_stop_delete_test.go:237: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p no-preload-028452 -n no-preload-028452
start_stop_delete_test.go:237: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p no-preload-028452 -n no-preload-028452: exit status 7 (90.451247ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
start_stop_delete_test.go:237: status error: exit status 7 (may be ok)
start_stop_delete_test.go:244: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p no-preload-028452 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/no-preload/serial/EnableAddonAfterStop (0.24s)

                                                
                                    
TestStartStop/group/no-preload/serial/SecondStart (396.3s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/SecondStart
start_stop_delete_test.go:254: (dbg) Run:  out/minikube-linux-amd64 start -p no-preload-028452 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.32.2
start_stop_delete_test.go:254: (dbg) Done: out/minikube-linux-amd64 start -p no-preload-028452 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.32.2: (6m35.895637028s)
start_stop_delete_test.go:260: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p no-preload-028452 -n no-preload-028452
--- PASS: TestStartStop/group/no-preload/serial/SecondStart (396.30s)

                                                
                                    
TestStartStop/group/embed-certs/serial/EnableAddonAfterStop (0.26s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/EnableAddonAfterStop
start_stop_delete_test.go:237: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p embed-certs-931633 -n embed-certs-931633
start_stop_delete_test.go:237: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p embed-certs-931633 -n embed-certs-931633: exit status 7 (96.950155ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
start_stop_delete_test.go:237: status error: exit status 7 (may be ok)
start_stop_delete_test.go:244: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p embed-certs-931633 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/embed-certs/serial/EnableAddonAfterStop (0.26s)

                                                
                                    
TestStartStop/group/embed-certs/serial/SecondStart (352.37s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/SecondStart
start_stop_delete_test.go:254: (dbg) Run:  out/minikube-linux-amd64 start -p embed-certs-931633 --memory=2200 --alsologtostderr --wait=true --embed-certs --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.32.2
start_stop_delete_test.go:254: (dbg) Done: out/minikube-linux-amd64 start -p embed-certs-931633 --memory=2200 --alsologtostderr --wait=true --embed-certs --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.32.2: (5m51.998282584s)
start_stop_delete_test.go:260: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p embed-certs-931633 -n embed-certs-931633
--- PASS: TestStartStop/group/embed-certs/serial/SecondStart (352.37s)

                                                
                                    
TestPause/serial/Start (82.32s)

                                                
                                                
=== RUN   TestPause/serial/Start
pause_test.go:80: (dbg) Run:  out/minikube-linux-amd64 start -p pause-111763 --memory=2048 --install-addons=false --wait=all --driver=kvm2  --container-runtime=crio
E0407 13:37:04.915684 1169716 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20602-1162386/.minikube/profiles/addons-660533/client.crt: no such file or directory" logger="UnhandledError"
pause_test.go:80: (dbg) Done: out/minikube-linux-amd64 start -p pause-111763 --memory=2048 --install-addons=false --wait=all --driver=kvm2  --container-runtime=crio: (1m22.320567764s)
--- PASS: TestPause/serial/Start (82.32s)

                                                
                                    
TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop (11.01s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop
start_stop_delete_test.go:272: (dbg) TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:344: "kubernetes-dashboard-7779f9b69b-m2kk7" [337a1632-688f-44c3-9f0a-30dcdb5f4c14] Pending / Ready:ContainersNotReady (containers with unready status: [kubernetes-dashboard]) / ContainersReady:ContainersNotReady (containers with unready status: [kubernetes-dashboard])
helpers_test.go:344: "kubernetes-dashboard-7779f9b69b-m2kk7" [337a1632-688f-44c3-9f0a-30dcdb5f4c14] Running
start_stop_delete_test.go:272: (dbg) TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 11.004800279s
--- PASS: TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop (11.01s)

                                                
                                    
TestStartStop/group/embed-certs/serial/AddonExistsAfterStop (5.09s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/AddonExistsAfterStop
start_stop_delete_test.go:285: (dbg) TestStartStop/group/embed-certs/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:344: "kubernetes-dashboard-7779f9b69b-m2kk7" [337a1632-688f-44c3-9f0a-30dcdb5f4c14] Running
start_stop_delete_test.go:285: (dbg) TestStartStop/group/embed-certs/serial/AddonExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 5.004893309s
start_stop_delete_test.go:289: (dbg) Run:  kubectl --context embed-certs-931633 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
--- PASS: TestStartStop/group/embed-certs/serial/AddonExistsAfterStop (5.09s)

                                                
                                    
TestStartStop/group/embed-certs/serial/VerifyKubernetesImages (0.25s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/VerifyKubernetesImages
start_stop_delete_test.go:302: (dbg) Run:  out/minikube-linux-amd64 -p embed-certs-931633 image list --format=json
start_stop_delete_test.go:302: Found non-minikube image: kindest/kindnetd:v20241212-9f82dd49
start_stop_delete_test.go:302: Found non-minikube image: gcr.io/k8s-minikube/busybox:1.28.4-glibc
--- PASS: TestStartStop/group/embed-certs/serial/VerifyKubernetesImages (0.25s)
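
The image audit is a single CLI call: the test lists the images loaded in the profile as JSON and flags anything outside the expected Kubernetes set (here the busybox test image and the kindnet CNI image). To inspect the raw list for this profile:

  out/minikube-linux-amd64 -p embed-certs-931633 image list --format=json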

                                                
                                    
TestStartStop/group/embed-certs/serial/Pause (3.03s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/Pause
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 pause -p embed-certs-931633 --alsologtostderr -v=1
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p embed-certs-931633 -n embed-certs-931633
start_stop_delete_test.go:309: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p embed-certs-931633 -n embed-certs-931633: exit status 2 (273.091704ms)

                                                
                                                
-- stdout --
	Paused

                                                
                                                
-- /stdout --
start_stop_delete_test.go:309: status error: exit status 2 (may be ok)
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p embed-certs-931633 -n embed-certs-931633
start_stop_delete_test.go:309: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Kubelet}} -p embed-certs-931633 -n embed-certs-931633: exit status 2 (289.188502ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
start_stop_delete_test.go:309: status error: exit status 2 (may be ok)
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 unpause -p embed-certs-931633 --alsologtostderr -v=1
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p embed-certs-931633 -n embed-certs-931633
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p embed-certs-931633 -n embed-certs-931633
--- PASS: TestStartStop/group/embed-certs/serial/Pause (3.03s)
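
The pause check drives the cluster through pause and unpause while reading per-component status templates; while paused, {{.APIServer}} reports Paused and {{.Kubelet}} reports Stopped, each with exit status 2, which the test tolerates. A condensed replay of the same steps:

  out/minikube-linux-amd64 pause -p embed-certs-931633 --alsologtostderr -v=1
  out/minikube-linux-amd64 status --format={{.APIServer}} -p embed-certs-931633 -n embed-certs-931633   # "Paused", exit 2
  out/minikube-linux-amd64 status --format={{.Kubelet}} -p embed-certs-931633 -n embed-certs-931633     # "Stopped", exit 2
  out/minikube-linux-amd64 unpause -p embed-certs-931633 --alsologtostderr -v=1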

                                                
                                    
TestStoppedBinaryUpgrade/Setup (0.64s)

                                                
                                                
=== RUN   TestStoppedBinaryUpgrade/Setup
--- PASS: TestStoppedBinaryUpgrade/Setup (0.64s)

                                                
                                    
TestStoppedBinaryUpgrade/Upgrade (100.61s)

                                                
                                                
=== RUN   TestStoppedBinaryUpgrade/Upgrade
version_upgrade_test.go:183: (dbg) Run:  /tmp/minikube-v1.26.0.2815407892 start -p stopped-upgrade-392390 --memory=2200 --vm-driver=kvm2  --container-runtime=crio
version_upgrade_test.go:183: (dbg) Done: /tmp/minikube-v1.26.0.2815407892 start -p stopped-upgrade-392390 --memory=2200 --vm-driver=kvm2  --container-runtime=crio: (50.280386433s)
version_upgrade_test.go:192: (dbg) Run:  /tmp/minikube-v1.26.0.2815407892 -p stopped-upgrade-392390 stop
version_upgrade_test.go:192: (dbg) Done: /tmp/minikube-v1.26.0.2815407892 -p stopped-upgrade-392390 stop: (2.153259295s)
version_upgrade_test.go:198: (dbg) Run:  out/minikube-linux-amd64 start -p stopped-upgrade-392390 --memory=2200 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio
version_upgrade_test.go:198: (dbg) Done: out/minikube-linux-amd64 start -p stopped-upgrade-392390 --memory=2200 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio: (48.172176111s)
--- PASS: TestStoppedBinaryUpgrade/Upgrade (100.61s)
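
The upgrade path exercised here is: bring a cluster up with the downloaded legacy release (the /tmp path is this run's temporary copy of minikube v1.26.0), stop it, then restart the same profile with the freshly built binary so it adopts the existing state:

  # old binary: create and then stop the cluster
  /tmp/minikube-v1.26.0.2815407892 start -p stopped-upgrade-392390 --memory=2200 --vm-driver=kvm2 --container-runtime=crio
  /tmp/minikube-v1.26.0.2815407892 -p stopped-upgrade-392390 stop
  # new binary: restart the stopped profile in place
  out/minikube-linux-amd64 start -p stopped-upgrade-392390 --memory=2200 --alsologtostderr -v=1 --driver=kvm2 --container-runtime=crio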

                                                
                                    
TestStartStop/group/no-preload/serial/UserAppExistsAfterStop (9.01s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/UserAppExistsAfterStop
start_stop_delete_test.go:272: (dbg) TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:344: "kubernetes-dashboard-7779f9b69b-s85zq" [5105c060-7943-4943-9738-b6185669b60a] Pending / Ready:ContainersNotReady (containers with unready status: [kubernetes-dashboard]) / ContainersReady:ContainersNotReady (containers with unready status: [kubernetes-dashboard])
helpers_test.go:344: "kubernetes-dashboard-7779f9b69b-s85zq" [5105c060-7943-4943-9738-b6185669b60a] Running
start_stop_delete_test.go:272: (dbg) TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 9.004930763s
--- PASS: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop (9.01s)

                                                
                                    
TestStartStop/group/no-preload/serial/AddonExistsAfterStop (5.08s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/AddonExistsAfterStop
start_stop_delete_test.go:285: (dbg) TestStartStop/group/no-preload/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:344: "kubernetes-dashboard-7779f9b69b-s85zq" [5105c060-7943-4943-9738-b6185669b60a] Running
start_stop_delete_test.go:285: (dbg) TestStartStop/group/no-preload/serial/AddonExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 5.004304242s
start_stop_delete_test.go:289: (dbg) Run:  kubectl --context no-preload-028452 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
--- PASS: TestStartStop/group/no-preload/serial/AddonExistsAfterStop (5.08s)

                                                
                                    
TestStartStop/group/no-preload/serial/VerifyKubernetesImages (0.23s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/VerifyKubernetesImages
start_stop_delete_test.go:302: (dbg) Run:  out/minikube-linux-amd64 -p no-preload-028452 image list --format=json
start_stop_delete_test.go:302: Found non-minikube image: gcr.io/k8s-minikube/busybox:1.28.4-glibc
--- PASS: TestStartStop/group/no-preload/serial/VerifyKubernetesImages (0.23s)

                                                
                                    
TestStartStop/group/no-preload/serial/Pause (3.32s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/Pause
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 pause -p no-preload-028452 --alsologtostderr -v=1
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p no-preload-028452 -n no-preload-028452
start_stop_delete_test.go:309: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p no-preload-028452 -n no-preload-028452: exit status 2 (325.000026ms)

                                                
                                                
-- stdout --
	Paused

                                                
                                                
-- /stdout --
start_stop_delete_test.go:309: status error: exit status 2 (may be ok)
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p no-preload-028452 -n no-preload-028452
start_stop_delete_test.go:309: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Kubelet}} -p no-preload-028452 -n no-preload-028452: exit status 2 (349.78621ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
start_stop_delete_test.go:309: status error: exit status 2 (may be ok)
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 unpause -p no-preload-028452 --alsologtostderr -v=1
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p no-preload-028452 -n no-preload-028452
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p no-preload-028452 -n no-preload-028452
--- PASS: TestStartStop/group/no-preload/serial/Pause (3.32s)

                                                
                                    
TestStartStop/group/default-k8s-diff-port/serial/FirstStart (53.87s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/FirstStart
start_stop_delete_test.go:184: (dbg) Run:  out/minikube-linux-amd64 start -p default-k8s-diff-port-405061 --memory=2200 --alsologtostderr --wait=true --apiserver-port=8444 --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.32.2
start_stop_delete_test.go:184: (dbg) Done: out/minikube-linux-amd64 start -p default-k8s-diff-port-405061 --memory=2200 --alsologtostderr --wait=true --apiserver-port=8444 --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.32.2: (53.871550763s)
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/FirstStart (53.87s)
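
This variant differs from the other StartStop groups only in pinning the API server to port 8444 rather than minikube's usual 8443; the full invocation from this run is:

  out/minikube-linux-amd64 start -p default-k8s-diff-port-405061 --memory=2200 --alsologtostderr --wait=true --apiserver-port=8444 --driver=kvm2 --container-runtime=crio --kubernetes-version=v1.32.2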

                                                
                                    
TestStartStop/group/default-k8s-diff-port/serial/DeployApp (10.34s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/DeployApp
start_stop_delete_test.go:194: (dbg) Run:  kubectl --context default-k8s-diff-port-405061 create -f testdata/busybox.yaml
start_stop_delete_test.go:194: (dbg) TestStartStop/group/default-k8s-diff-port/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:344: "busybox" [9e62e34a-53a5-4767-a0e5-a5c694288205] Pending
helpers_test.go:344: "busybox" [9e62e34a-53a5-4767-a0e5-a5c694288205] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:344: "busybox" [9e62e34a-53a5-4767-a0e5-a5c694288205] Running
start_stop_delete_test.go:194: (dbg) TestStartStop/group/default-k8s-diff-port/serial/DeployApp: integration-test=busybox healthy within 10.004233586s
start_stop_delete_test.go:194: (dbg) Run:  kubectl --context default-k8s-diff-port-405061 exec busybox -- /bin/sh -c "ulimit -n"
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/DeployApp (10.34s)

                                                
                                    
TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive (1.2s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive
start_stop_delete_test.go:203: (dbg) Run:  out/minikube-linux-amd64 addons enable metrics-server -p default-k8s-diff-port-405061 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:203: (dbg) Done: out/minikube-linux-amd64 addons enable metrics-server -p default-k8s-diff-port-405061 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain: (1.070717543s)
start_stop_delete_test.go:213: (dbg) Run:  kubectl --context default-k8s-diff-port-405061 describe deploy/metrics-server -n kube-system
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive (1.20s)

                                                
                                    
TestStartStop/group/default-k8s-diff-port/serial/Stop (91.22s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/Stop
start_stop_delete_test.go:226: (dbg) Run:  out/minikube-linux-amd64 stop -p default-k8s-diff-port-405061 --alsologtostderr -v=3
start_stop_delete_test.go:226: (dbg) Done: out/minikube-linux-amd64 stop -p default-k8s-diff-port-405061 --alsologtostderr -v=3: (1m31.223833589s)
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/Stop (91.22s)

                                                
                                    
TestStoppedBinaryUpgrade/MinikubeLogs (1.18s)

                                                
                                                
=== RUN   TestStoppedBinaryUpgrade/MinikubeLogs
version_upgrade_test.go:206: (dbg) Run:  out/minikube-linux-amd64 logs -p stopped-upgrade-392390
version_upgrade_test.go:206: (dbg) Done: out/minikube-linux-amd64 logs -p stopped-upgrade-392390: (1.180529671s)
--- PASS: TestStoppedBinaryUpgrade/MinikubeLogs (1.18s)

                                                
                                    
TestStartStop/group/newest-cni/serial/FirstStart (49.31s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/FirstStart
start_stop_delete_test.go:184: (dbg) Run:  out/minikube-linux-amd64 start -p newest-cni-896794 --memory=2200 --alsologtostderr --wait=apiserver,system_pods,default_sa --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.32.2
start_stop_delete_test.go:184: (dbg) Done: out/minikube-linux-amd64 start -p newest-cni-896794 --memory=2200 --alsologtostderr --wait=apiserver,system_pods,default_sa --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.32.2: (49.310479133s)
--- PASS: TestStartStop/group/newest-cni/serial/FirstStart (49.31s)
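
The newest-cni profile starts with an explicit CNI network plugin and a custom pod CIDR passed through to kubeadm, and only waits for the API server, system pods and default service account, since pods cannot schedule until the CNI is set up (hence the WARNING lines in the later subtests):

  out/minikube-linux-amd64 start -p newest-cni-896794 --memory=2200 --alsologtostderr --wait=apiserver,system_pods,default_sa --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=kvm2 --container-runtime=crio --kubernetes-version=v1.32.2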

                                                
                                    
TestStartStop/group/newest-cni/serial/DeployApp (0s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/DeployApp
--- PASS: TestStartStop/group/newest-cni/serial/DeployApp (0.00s)

                                                
                                    
TestStartStop/group/newest-cni/serial/EnableAddonWhileActive (1.5s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/EnableAddonWhileActive
start_stop_delete_test.go:203: (dbg) Run:  out/minikube-linux-amd64 addons enable metrics-server -p newest-cni-896794 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:203: (dbg) Done: out/minikube-linux-amd64 addons enable metrics-server -p newest-cni-896794 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain: (1.504696321s)
start_stop_delete_test.go:209: WARNING: cni mode requires additional setup before pods can schedule :(
--- PASS: TestStartStop/group/newest-cni/serial/EnableAddonWhileActive (1.50s)

                                                
                                    
TestStartStop/group/newest-cni/serial/Stop (7.36s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/Stop
start_stop_delete_test.go:226: (dbg) Run:  out/minikube-linux-amd64 stop -p newest-cni-896794 --alsologtostderr -v=3
start_stop_delete_test.go:226: (dbg) Done: out/minikube-linux-amd64 stop -p newest-cni-896794 --alsologtostderr -v=3: (7.364016313s)
--- PASS: TestStartStop/group/newest-cni/serial/Stop (7.36s)

                                                
                                    
TestNetworkPlugins/group/auto/Start (58.1s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/auto/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p auto-056871 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --driver=kvm2  --container-runtime=crio
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p auto-056871 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --driver=kvm2  --container-runtime=crio: (58.098496907s)
--- PASS: TestNetworkPlugins/group/auto/Start (58.10s)

                                                
                                    
TestStartStop/group/newest-cni/serial/EnableAddonAfterStop (0.23s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/EnableAddonAfterStop
start_stop_delete_test.go:237: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p newest-cni-896794 -n newest-cni-896794
start_stop_delete_test.go:237: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p newest-cni-896794 -n newest-cni-896794: exit status 7 (84.527632ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
start_stop_delete_test.go:237: status error: exit status 7 (may be ok)
start_stop_delete_test.go:244: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p newest-cni-896794 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/newest-cni/serial/EnableAddonAfterStop (0.23s)

                                                
                                    
TestStartStop/group/newest-cni/serial/SecondStart (57.2s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/SecondStart
start_stop_delete_test.go:254: (dbg) Run:  out/minikube-linux-amd64 start -p newest-cni-896794 --memory=2200 --alsologtostderr --wait=apiserver,system_pods,default_sa --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.32.2
start_stop_delete_test.go:254: (dbg) Done: out/minikube-linux-amd64 start -p newest-cni-896794 --memory=2200 --alsologtostderr --wait=apiserver,system_pods,default_sa --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.32.2: (56.816192108s)
start_stop_delete_test.go:260: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p newest-cni-896794 -n newest-cni-896794
--- PASS: TestStartStop/group/newest-cni/serial/SecondStart (57.20s)

                                                
                                    
TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop (0.26s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop
start_stop_delete_test.go:237: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p default-k8s-diff-port-405061 -n default-k8s-diff-port-405061
start_stop_delete_test.go:237: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p default-k8s-diff-port-405061 -n default-k8s-diff-port-405061: exit status 7 (93.742366ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
start_stop_delete_test.go:237: status error: exit status 7 (may be ok)
start_stop_delete_test.go:244: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p default-k8s-diff-port-405061 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop (0.26s)

                                                
                                    
TestStartStop/group/default-k8s-diff-port/serial/SecondStart (315.19s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/SecondStart
start_stop_delete_test.go:254: (dbg) Run:  out/minikube-linux-amd64 start -p default-k8s-diff-port-405061 --memory=2200 --alsologtostderr --wait=true --apiserver-port=8444 --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.32.2
start_stop_delete_test.go:254: (dbg) Done: out/minikube-linux-amd64 start -p default-k8s-diff-port-405061 --memory=2200 --alsologtostderr --wait=true --apiserver-port=8444 --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.32.2: (5m14.900165547s)
start_stop_delete_test.go:260: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p default-k8s-diff-port-405061 -n default-k8s-diff-port-405061
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/SecondStart (315.19s)

                                                
                                    
TestNetworkPlugins/group/auto/KubeletFlags (0.29s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/auto/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p auto-056871 "pgrep -a kubelet"
I0407 13:46:22.237685 1169716 config.go:182] Loaded profile config "auto-056871": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.32.2
--- PASS: TestNetworkPlugins/group/auto/KubeletFlags (0.29s)

                                                
                                    
x
+
TestNetworkPlugins/group/auto/NetCatPod (10.33s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/auto/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context auto-056871 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/auto/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-5d86dc444-26nrc" [24e144e0-099f-42ba-b989-8ddf4359d58a] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:344: "netcat-5d86dc444-26nrc" [24e144e0-099f-42ba-b989-8ddf4359d58a] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/auto/NetCatPod: app=netcat healthy within 10.005585452s
--- PASS: TestNetworkPlugins/group/auto/NetCatPod (10.33s)
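
The NetCatPod step deploys a small netcat pod and waits for it to report Ready. A hedged manual equivalent, assuming "kubectl wait" in place of the test helper's own polling (the 15m timeout mirrors the wait budget in the log):

	# (re)create the netcat deployment used by the network-plugin tests
	kubectl --context auto-056871 replace --force -f testdata/netcat-deployment.yaml
	# block until the pod behind the app=netcat label is Ready
	kubectl --context auto-056871 wait --for=condition=ready pod -l app=netcat --timeout=15m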

                                                
                                    
x
+
TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop (0s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop
start_stop_delete_test.go:271: WARNING: cni mode requires additional setup before pods can schedule :(
--- PASS: TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop (0.00s)

                                                
                                    
x
+
TestStartStop/group/newest-cni/serial/AddonExistsAfterStop (0s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/AddonExistsAfterStop
start_stop_delete_test.go:282: WARNING: cni mode requires additional setup before pods can schedule :(
--- PASS: TestStartStop/group/newest-cni/serial/AddonExistsAfterStop (0.00s)

                                                
                                    
x
+
TestStartStop/group/newest-cni/serial/VerifyKubernetesImages (0.23s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/VerifyKubernetesImages
start_stop_delete_test.go:302: (dbg) Run:  out/minikube-linux-amd64 -p newest-cni-896794 image list --format=json
start_stop_delete_test.go:302: Found non-minikube image: kindest/kindnetd:v20241212-9f82dd49
--- PASS: TestStartStop/group/newest-cni/serial/VerifyKubernetesImages (0.23s)

                                                
                                    
x
+
TestStartStop/group/newest-cni/serial/Pause (3.31s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/Pause
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 pause -p newest-cni-896794 --alsologtostderr -v=1
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p newest-cni-896794 -n newest-cni-896794
start_stop_delete_test.go:309: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p newest-cni-896794 -n newest-cni-896794: exit status 2 (276.017441ms)

                                                
                                                
-- stdout --
	Paused

                                                
                                                
-- /stdout --
start_stop_delete_test.go:309: status error: exit status 2 (may be ok)
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p newest-cni-896794 -n newest-cni-896794
start_stop_delete_test.go:309: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Kubelet}} -p newest-cni-896794 -n newest-cni-896794: exit status 2 (294.566519ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
start_stop_delete_test.go:309: status error: exit status 2 (may be ok)
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 unpause -p newest-cni-896794 --alsologtostderr -v=1
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p newest-cni-896794 -n newest-cni-896794
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p newest-cni-896794 -n newest-cni-896794
--- PASS: TestStartStop/group/newest-cni/serial/Pause (3.31s)
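
The Pause test drives a pause/unpause cycle and checks that, while paused, the apiserver reports Paused and the kubelet reports Stopped. A sketch of the same sequence using the commands from the log (exit status 2 from "status" is expected while components are paused):

	out/minikube-linux-amd64 pause -p newest-cni-896794 --alsologtostderr -v=1
	# expect "Paused" on stdout, exit status 2
	out/minikube-linux-amd64 status --format={{.APIServer}} -p newest-cni-896794 -n newest-cni-896794
	# expect "Stopped" on stdout, exit status 2
	out/minikube-linux-amd64 status --format={{.Kubelet}} -p newest-cni-896794 -n newest-cni-896794
	out/minikube-linux-amd64 unpause -p newest-cni-896794 --alsologtostderr -v=1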

                                                
                                    
x
+
TestNetworkPlugins/group/auto/DNS (0.2s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/auto/DNS
net_test.go:175: (dbg) Run:  kubectl --context auto-056871 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/auto/DNS (0.20s)

                                                
                                    
x
+
TestNetworkPlugins/group/auto/Localhost (0.2s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/auto/Localhost
net_test.go:194: (dbg) Run:  kubectl --context auto-056871 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/auto/Localhost (0.20s)

                                                
                                    
x
+
TestNetworkPlugins/group/auto/HairPin (0.25s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/auto/HairPin
net_test.go:264: (dbg) Run:  kubectl --context auto-056871 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/auto/HairPin (0.25s)
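
The DNS/Localhost/HairPin trio above probes three paths from inside the netcat pod: cluster DNS, a loopback connection, and a hairpin connection back through the pod's own service. The same probes, copied from the log (the -w 5 -i 5 flags are simply the timeouts the test uses; port 8080 is the port the test image listens on):

	# cluster DNS resolution from inside the pod
	kubectl --context auto-056871 exec deployment/netcat -- nslookup kubernetes.default
	# loopback connectivity
	kubectl --context auto-056871 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
	# hairpin: reach the pod back through its own "netcat" service name
	kubectl --context auto-056871 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"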

                                                
                                    
x
+
TestNetworkPlugins/group/kindnet/Start (68.04s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/kindnet/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p kindnet-056871 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=kindnet --driver=kvm2  --container-runtime=crio
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p kindnet-056871 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=kindnet --driver=kvm2  --container-runtime=crio: (1m8.044337518s)
--- PASS: TestNetworkPlugins/group/kindnet/Start (68.04s)
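
Each network-plugin group boots its own profile with the CNI under test selected via the --cni flag; kindnet and calico use built-in names, while the custom-flannel group further down passes a manifest path instead. A sketch of the two variants, with flags taken from the start commands in this log:

	# built-in CNI selected by name
	out/minikube-linux-amd64 start -p kindnet-056871 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=kindnet --driver=kvm2 --container-runtime=crio
	# custom CNI supplied as a manifest on disk
	out/minikube-linux-amd64 start -p custom-flannel-056871 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=testdata/kube-flannel.yaml --driver=kvm2 --container-runtime=crio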

                                                
                                    
x
+
TestNetworkPlugins/group/calico/Start (91.47s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/calico/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p calico-056871 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=calico --driver=kvm2  --container-runtime=crio
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p calico-056871 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=calico --driver=kvm2  --container-runtime=crio: (1m31.470027743s)
--- PASS: TestNetworkPlugins/group/calico/Start (91.47s)

                                                
                                    
x
+
TestNetworkPlugins/group/kindnet/ControllerPod (6.01s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/kindnet/ControllerPod
net_test.go:120: (dbg) TestNetworkPlugins/group/kindnet/ControllerPod: waiting 10m0s for pods matching "app=kindnet" in namespace "kube-system" ...
helpers_test.go:344: "kindnet-bbxcx" [3b3df523-eadd-4260-ae41-248eebe4b7fe] Running
net_test.go:120: (dbg) TestNetworkPlugins/group/kindnet/ControllerPod: app=kindnet healthy within 6.00457213s
--- PASS: TestNetworkPlugins/group/kindnet/ControllerPod (6.01s)

                                                
                                    
x
+
TestNetworkPlugins/group/kindnet/KubeletFlags (0.24s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/kindnet/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p kindnet-056871 "pgrep -a kubelet"
I0407 13:47:47.731923 1169716 config.go:182] Loaded profile config "kindnet-056871": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.32.2
--- PASS: TestNetworkPlugins/group/kindnet/KubeletFlags (0.24s)

                                                
                                    
x
+
TestNetworkPlugins/group/kindnet/NetCatPod (10.26s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/kindnet/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context kindnet-056871 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/kindnet/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-5d86dc444-6p4t4" [663d9feb-5b4b-4308-bc94-4015d60ad4a5] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:344: "netcat-5d86dc444-6p4t4" [663d9feb-5b4b-4308-bc94-4015d60ad4a5] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/kindnet/NetCatPod: app=netcat healthy within 10.005583625s
--- PASS: TestNetworkPlugins/group/kindnet/NetCatPod (10.26s)

                                                
                                    
x
+
TestNetworkPlugins/group/kindnet/DNS (0.18s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/kindnet/DNS
net_test.go:175: (dbg) Run:  kubectl --context kindnet-056871 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/kindnet/DNS (0.18s)

                                                
                                    
x
+
TestNetworkPlugins/group/kindnet/Localhost (0.14s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/kindnet/Localhost
net_test.go:194: (dbg) Run:  kubectl --context kindnet-056871 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/kindnet/Localhost (0.14s)

                                                
                                    
x
+
TestNetworkPlugins/group/kindnet/HairPin (0.13s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/kindnet/HairPin
net_test.go:264: (dbg) Run:  kubectl --context kindnet-056871 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/kindnet/HairPin (0.13s)

                                                
                                    
x
+
TestNetworkPlugins/group/custom-flannel/Start (77.3s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/custom-flannel/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p custom-flannel-056871 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=testdata/kube-flannel.yaml --driver=kvm2  --container-runtime=crio
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p custom-flannel-056871 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=testdata/kube-flannel.yaml --driver=kvm2  --container-runtime=crio: (1m17.30164247s)
--- PASS: TestNetworkPlugins/group/custom-flannel/Start (77.30s)

                                                
                                    
x
+
TestNetworkPlugins/group/calico/ControllerPod (6.01s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/calico/ControllerPod
net_test.go:120: (dbg) TestNetworkPlugins/group/calico/ControllerPod: waiting 10m0s for pods matching "k8s-app=calico-node" in namespace "kube-system" ...
helpers_test.go:344: "calico-node-4xr29" [35e0ab6d-8420-46ae-aa4c-5f5b03e244ea] Running
net_test.go:120: (dbg) TestNetworkPlugins/group/calico/ControllerPod: k8s-app=calico-node healthy within 6.003804689s
--- PASS: TestNetworkPlugins/group/calico/ControllerPod (6.01s)

                                                
                                    
x
+
TestNetworkPlugins/group/calico/KubeletFlags (0.24s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/calico/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p calico-056871 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/calico/KubeletFlags (0.24s)

                                                
                                    
x
+
TestNetworkPlugins/group/calico/NetCatPod (10.32s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/calico/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context calico-056871 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/calico/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-5d86dc444-mlzc5" [f930440e-070f-4a33-abc0-492d805d141f] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:344: "netcat-5d86dc444-mlzc5" [f930440e-070f-4a33-abc0-492d805d141f] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/calico/NetCatPod: app=netcat healthy within 10.00459795s
--- PASS: TestNetworkPlugins/group/calico/NetCatPod (10.32s)

                                                
                                    
x
+
TestNetworkPlugins/group/calico/DNS (0.15s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/calico/DNS
net_test.go:175: (dbg) Run:  kubectl --context calico-056871 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/calico/DNS (0.15s)

                                                
                                    
x
+
TestNetworkPlugins/group/calico/Localhost (0.15s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/calico/Localhost
net_test.go:194: (dbg) Run:  kubectl --context calico-056871 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/calico/Localhost (0.15s)

                                                
                                    
x
+
TestNetworkPlugins/group/calico/HairPin (0.13s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/calico/HairPin
net_test.go:264: (dbg) Run:  kubectl --context calico-056871 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/calico/HairPin (0.13s)

                                                
                                    
x
+
TestNetworkPlugins/group/enable-default-cni/Start (59.75s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/enable-default-cni/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p enable-default-cni-056871 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --enable-default-cni=true --driver=kvm2  --container-runtime=crio
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p enable-default-cni-056871 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --enable-default-cni=true --driver=kvm2  --container-runtime=crio: (59.746952811s)
--- PASS: TestNetworkPlugins/group/enable-default-cni/Start (59.75s)

                                                
                                    
x
+
TestNetworkPlugins/group/custom-flannel/KubeletFlags (0.24s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/custom-flannel/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p custom-flannel-056871 "pgrep -a kubelet"
I0407 13:49:32.710876 1169716 config.go:182] Loaded profile config "custom-flannel-056871": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.32.2
--- PASS: TestNetworkPlugins/group/custom-flannel/KubeletFlags (0.24s)

                                                
                                    
x
+
TestNetworkPlugins/group/custom-flannel/NetCatPod (11.25s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/custom-flannel/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context custom-flannel-056871 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/custom-flannel/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-5d86dc444-54srz" [4f99ab4a-6e24-429c-88c3-9ec33685b5a9] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:344: "netcat-5d86dc444-54srz" [4f99ab4a-6e24-429c-88c3-9ec33685b5a9] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/custom-flannel/NetCatPod: app=netcat healthy within 11.004013785s
--- PASS: TestNetworkPlugins/group/custom-flannel/NetCatPod (11.25s)

                                                
                                    
x
+
TestNetworkPlugins/group/custom-flannel/DNS (0.15s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/custom-flannel/DNS
net_test.go:175: (dbg) Run:  kubectl --context custom-flannel-056871 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/custom-flannel/DNS (0.15s)

                                                
                                    
x
+
TestNetworkPlugins/group/custom-flannel/Localhost (0.13s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/custom-flannel/Localhost
net_test.go:194: (dbg) Run:  kubectl --context custom-flannel-056871 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/custom-flannel/Localhost (0.13s)

                                                
                                    
x
+
TestNetworkPlugins/group/custom-flannel/HairPin (0.14s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/custom-flannel/HairPin
net_test.go:264: (dbg) Run:  kubectl --context custom-flannel-056871 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/custom-flannel/HairPin (0.14s)

                                                
                                    
x
+
TestNetworkPlugins/group/enable-default-cni/KubeletFlags (0.23s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/enable-default-cni/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p enable-default-cni-056871 "pgrep -a kubelet"
I0407 13:49:57.500827 1169716 config.go:182] Loaded profile config "enable-default-cni-056871": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.32.2
--- PASS: TestNetworkPlugins/group/enable-default-cni/KubeletFlags (0.23s)

                                                
                                    
x
+
TestNetworkPlugins/group/enable-default-cni/NetCatPod (10.26s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/enable-default-cni/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context enable-default-cni-056871 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/enable-default-cni/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-5d86dc444-dt94r" [6e20856e-a220-4271-a802-e5534614d384] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:344: "netcat-5d86dc444-dt94r" [6e20856e-a220-4271-a802-e5534614d384] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/enable-default-cni/NetCatPod: app=netcat healthy within 10.004507821s
--- PASS: TestNetworkPlugins/group/enable-default-cni/NetCatPod (10.26s)

                                                
                                    
x
+
TestNetworkPlugins/group/flannel/Start (72.84s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/flannel/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p flannel-056871 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=flannel --driver=kvm2  --container-runtime=crio
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p flannel-056871 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=flannel --driver=kvm2  --container-runtime=crio: (1m12.844537635s)
--- PASS: TestNetworkPlugins/group/flannel/Start (72.84s)

                                                
                                    
x
+
TestNetworkPlugins/group/enable-default-cni/DNS (0.18s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/enable-default-cni/DNS
net_test.go:175: (dbg) Run:  kubectl --context enable-default-cni-056871 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/enable-default-cni/DNS (0.18s)

                                                
                                    
x
+
TestNetworkPlugins/group/enable-default-cni/Localhost (0.13s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/enable-default-cni/Localhost
net_test.go:194: (dbg) Run:  kubectl --context enable-default-cni-056871 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/enable-default-cni/Localhost (0.13s)

                                                
                                    
x
+
TestNetworkPlugins/group/enable-default-cni/HairPin (0.13s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/enable-default-cni/HairPin
net_test.go:264: (dbg) Run:  kubectl --context enable-default-cni-056871 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/enable-default-cni/HairPin (0.13s)

                                                
                                    
x
+
TestNetworkPlugins/group/bridge/Start (63.61s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/bridge/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p bridge-056871 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=bridge --driver=kvm2  --container-runtime=crio
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p bridge-056871 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=bridge --driver=kvm2  --container-runtime=crio: (1m3.60533574s)
--- PASS: TestNetworkPlugins/group/bridge/Start (63.61s)

                                                
                                    
x
+
TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop (6.01s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop
start_stop_delete_test.go:272: (dbg) TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:344: "kubernetes-dashboard-7779f9b69b-gs4nw" [b649d47f-6642-44f2-8935-a4b3f380f7ab] Running
start_stop_delete_test.go:272: (dbg) TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 6.004898975s
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop (6.01s)

                                                
                                    
x
+
TestNetworkPlugins/group/flannel/ControllerPod (6.01s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/flannel/ControllerPod
net_test.go:120: (dbg) TestNetworkPlugins/group/flannel/ControllerPod: waiting 10m0s for pods matching "app=flannel" in namespace "kube-flannel" ...
helpers_test.go:344: "kube-flannel-ds-5fdzf" [ba25aaa9-8785-4d40-952b-3fb6b09af2c2] Running
net_test.go:120: (dbg) TestNetworkPlugins/group/flannel/ControllerPod: app=flannel healthy within 6.004214042s
--- PASS: TestNetworkPlugins/group/flannel/ControllerPod (6.01s)

                                                
                                    
x
+
TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop (5.08s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop
start_stop_delete_test.go:285: (dbg) TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:344: "kubernetes-dashboard-7779f9b69b-gs4nw" [b649d47f-6642-44f2-8935-a4b3f380f7ab] Running
start_stop_delete_test.go:285: (dbg) TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 5.004275091s
start_stop_delete_test.go:289: (dbg) Run:  kubectl --context default-k8s-diff-port-405061 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop (5.08s)

                                                
                                    
x
+
TestNetworkPlugins/group/flannel/KubeletFlags (0.24s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/flannel/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p flannel-056871 "pgrep -a kubelet"
I0407 13:51:20.154122 1169716 config.go:182] Loaded profile config "flannel-056871": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.32.2
--- PASS: TestNetworkPlugins/group/flannel/KubeletFlags (0.24s)

                                                
                                    
x
+
TestNetworkPlugins/group/flannel/NetCatPod (11.28s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/flannel/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context flannel-056871 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/flannel/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-5d86dc444-v9jtc" [157a14e2-fb0f-4708-bf32-5d3ab3e3ec67] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:344: "netcat-5d86dc444-v9jtc" [157a14e2-fb0f-4708-bf32-5d3ab3e3ec67] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/flannel/NetCatPod: app=netcat healthy within 11.004439718s
--- PASS: TestNetworkPlugins/group/flannel/NetCatPod (11.28s)

                                                
                                    
x
+
TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages (0.27s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages
start_stop_delete_test.go:302: (dbg) Run:  out/minikube-linux-amd64 -p default-k8s-diff-port-405061 image list --format=json
E0407 13:51:23.835894 1169716 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20602-1162386/.minikube/profiles/auto-056871/client.crt: no such file or directory" logger="UnhandledError"
start_stop_delete_test.go:302: Found non-minikube image: kindest/kindnetd:v20241212-9f82dd49
start_stop_delete_test.go:302: Found non-minikube image: gcr.io/k8s-minikube/busybox:1.28.4-glibc
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages (0.27s)
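
VerifyKubernetesImages lists the images loaded into the node and flags anything outside the expected minikube/Kubernetes set (here kindnetd and busybox). A rough manual equivalent; the jq filter and the JSON field names are assumptions for illustration, not what the test itself runs:

	# dump loaded images as JSON, then filter out the core registry.k8s.io images
	out/minikube-linux-amd64 -p default-k8s-diff-port-405061 image list --format=json | jq -r '.[].repoTags[]' | grep -v 'registry.k8s.io'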

                                                
                                    
x
+
TestStartStop/group/default-k8s-diff-port/serial/Pause (2.92s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/Pause
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 pause -p default-k8s-diff-port-405061 --alsologtostderr -v=1
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p default-k8s-diff-port-405061 -n default-k8s-diff-port-405061
E0407 13:51:25.117587 1169716 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20602-1162386/.minikube/profiles/auto-056871/client.crt: no such file or directory" logger="UnhandledError"
start_stop_delete_test.go:309: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p default-k8s-diff-port-405061 -n default-k8s-diff-port-405061: exit status 2 (288.957701ms)

                                                
                                                
-- stdout --
	Paused

                                                
                                                
-- /stdout --
start_stop_delete_test.go:309: status error: exit status 2 (may be ok)
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p default-k8s-diff-port-405061 -n default-k8s-diff-port-405061
start_stop_delete_test.go:309: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Kubelet}} -p default-k8s-diff-port-405061 -n default-k8s-diff-port-405061: exit status 2 (277.069333ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
start_stop_delete_test.go:309: status error: exit status 2 (may be ok)
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 unpause -p default-k8s-diff-port-405061 --alsologtostderr -v=1
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p default-k8s-diff-port-405061 -n default-k8s-diff-port-405061
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p default-k8s-diff-port-405061 -n default-k8s-diff-port-405061
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/Pause (2.92s)

                                                
                                    
x
+
TestNetworkPlugins/group/bridge/KubeletFlags (0.31s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/bridge/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p bridge-056871 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/bridge/KubeletFlags (0.31s)

                                                
                                    
x
+
TestNetworkPlugins/group/bridge/NetCatPod (12.32s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/bridge/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context bridge-056871 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/bridge/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-5d86dc444-9n4d7" [1726e57e-aa59-42dc-8778-ba3aeaf51cbd] Pending
helpers_test.go:344: "netcat-5d86dc444-9n4d7" [1726e57e-aa59-42dc-8778-ba3aeaf51cbd] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:344: "netcat-5d86dc444-9n4d7" [1726e57e-aa59-42dc-8778-ba3aeaf51cbd] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/bridge/NetCatPod: app=netcat healthy within 12.004546392s
--- PASS: TestNetworkPlugins/group/bridge/NetCatPod (12.32s)

                                                
                                    
x
+
TestNetworkPlugins/group/flannel/DNS (0.18s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/flannel/DNS
net_test.go:175: (dbg) Run:  kubectl --context flannel-056871 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/flannel/DNS (0.18s)

                                                
                                    
x
+
TestNetworkPlugins/group/flannel/Localhost (0.15s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/flannel/Localhost
net_test.go:194: (dbg) Run:  kubectl --context flannel-056871 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/flannel/Localhost (0.15s)

                                                
                                    
x
+
TestNetworkPlugins/group/flannel/HairPin (0.14s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/flannel/HairPin
net_test.go:264: (dbg) Run:  kubectl --context flannel-056871 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/flannel/HairPin (0.14s)

                                                
                                    
x
+
TestNetworkPlugins/group/bridge/DNS (0.19s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/bridge/DNS
net_test.go:175: (dbg) Run:  kubectl --context bridge-056871 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/bridge/DNS (0.19s)

                                                
                                    
x
+
TestNetworkPlugins/group/bridge/Localhost (0.14s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/bridge/Localhost
net_test.go:194: (dbg) Run:  kubectl --context bridge-056871 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/bridge/Localhost (0.14s)

                                                
                                    
x
+
TestNetworkPlugins/group/bridge/HairPin (0.14s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/bridge/HairPin
net_test.go:264: (dbg) Run:  kubectl --context bridge-056871 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/bridge/HairPin (0.14s)

                                                
                                    

Test skip (35/322)

x
+
TestDownloadOnly/v1.20.0/cached-images (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.20.0/cached-images
aaa_download_only_test.go:129: Preload exists, images won't be cached
--- SKIP: TestDownloadOnly/v1.20.0/cached-images (0.00s)

                                                
                                    
x
+
TestDownloadOnly/v1.20.0/binaries (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.20.0/binaries
aaa_download_only_test.go:151: Preload exists, binaries are present within.
--- SKIP: TestDownloadOnly/v1.20.0/binaries (0.00s)

                                                
                                    
x
+
TestDownloadOnly/v1.20.0/kubectl (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.20.0/kubectl
aaa_download_only_test.go:167: Test for darwin and windows
--- SKIP: TestDownloadOnly/v1.20.0/kubectl (0.00s)

                                                
                                    
x
+
TestDownloadOnly/v1.32.2/cached-images (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.32.2/cached-images
aaa_download_only_test.go:129: Preload exists, images won't be cached
--- SKIP: TestDownloadOnly/v1.32.2/cached-images (0.00s)

                                                
                                    
x
+
TestDownloadOnly/v1.32.2/binaries (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.32.2/binaries
aaa_download_only_test.go:151: Preload exists, binaries are present within.
--- SKIP: TestDownloadOnly/v1.32.2/binaries (0.00s)

                                                
                                    
x
+
TestDownloadOnly/v1.32.2/kubectl (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.32.2/kubectl
aaa_download_only_test.go:167: Test for darwin and windows
--- SKIP: TestDownloadOnly/v1.32.2/kubectl (0.00s)

                                                
                                    
x
+
TestDownloadOnlyKic (0s)

                                                
                                                
=== RUN   TestDownloadOnlyKic
aaa_download_only_test.go:220: skipping, only for docker or podman driver
--- SKIP: TestDownloadOnlyKic (0.00s)

                                                
                                    
x
+
TestAddons/serial/Volcano (0.34s)

                                                
                                                
=== RUN   TestAddons/serial/Volcano
addons_test.go:789: skipping: crio not supported
addons_test.go:992: (dbg) Run:  out/minikube-linux-amd64 -p addons-660533 addons disable volcano --alsologtostderr -v=1
--- SKIP: TestAddons/serial/Volcano (0.34s)

                                                
                                    
x
+
TestAddons/serial/GCPAuth/RealCredentials (0s)

                                                
                                                
=== RUN   TestAddons/serial/GCPAuth/RealCredentials
addons_test.go:698: This test requires a GCE instance (excluding Cloud Shell) with a container based driver
--- SKIP: TestAddons/serial/GCPAuth/RealCredentials (0.00s)

                                                
                                    
x
+
TestAddons/parallel/Olm (0s)

                                                
                                                
=== RUN   TestAddons/parallel/Olm
=== PAUSE TestAddons/parallel/Olm

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/Olm
addons_test.go:422: Skipping OLM addon test until https://github.com/operator-framework/operator-lifecycle-manager/issues/2534 is resolved
--- SKIP: TestAddons/parallel/Olm (0.00s)

                                                
                                    
x
+
TestAddons/parallel/AmdGpuDevicePlugin (0s)

                                                
                                                
=== RUN   TestAddons/parallel/AmdGpuDevicePlugin
=== PAUSE TestAddons/parallel/AmdGpuDevicePlugin

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/AmdGpuDevicePlugin
addons_test.go:972: skip amd gpu test on all but docker driver and amd64 platform
--- SKIP: TestAddons/parallel/AmdGpuDevicePlugin (0.00s)

                                                
                                    
x
+
TestDockerFlags (0s)

                                                
                                                
=== RUN   TestDockerFlags
docker_test.go:41: skipping: only runs with docker container runtime, currently testing crio
--- SKIP: TestDockerFlags (0.00s)

                                                
                                    
x
+
TestDockerEnvContainerd (0s)

                                                
                                                
=== RUN   TestDockerEnvContainerd
docker_test.go:170: running with crio false linux amd64
docker_test.go:172: skipping: TestDockerEnvContainerd can only be run with the containerd runtime on Docker driver
--- SKIP: TestDockerEnvContainerd (0.00s)

                                                
                                    
x
+
TestHyperKitDriverInstallOrUpdate (0s)

                                                
                                                
=== RUN   TestHyperKitDriverInstallOrUpdate
driver_install_or_update_test.go:105: Skip if not darwin.
--- SKIP: TestHyperKitDriverInstallOrUpdate (0.00s)

                                                
                                    
x
+
TestHyperkitDriverSkipUpgrade (0s)

                                                
                                                
=== RUN   TestHyperkitDriverSkipUpgrade
driver_install_or_update_test.go:169: Skip if not darwin.
--- SKIP: TestHyperkitDriverSkipUpgrade (0.00s)

                                                
                                    
x
+
TestFunctional/parallel/DockerEnv (0s)

                                                
                                                
=== RUN   TestFunctional/parallel/DockerEnv
=== PAUSE TestFunctional/parallel/DockerEnv

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/DockerEnv
functional_test.go:480: only validate docker env with docker container runtime, currently testing crio
--- SKIP: TestFunctional/parallel/DockerEnv (0.00s)

                                                
                                    
x
+
TestFunctional/parallel/PodmanEnv (0s)

                                                
                                                
=== RUN   TestFunctional/parallel/PodmanEnv
=== PAUSE TestFunctional/parallel/PodmanEnv

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/PodmanEnv
functional_test.go:567: only validate podman env with docker container runtime, currently testing crio
--- SKIP: TestFunctional/parallel/PodmanEnv (0.00s)

                                                
                                    
x
+
TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig (0.01s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig
functional_test_tunnel_test.go:99: DNS forwarding is only supported for Hyperkit on Darwin, skipping test DNS forwarding
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig (0.01s)

                                                
                                    
x
+
TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil (0.02s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil
functional_test_tunnel_test.go:99: DNS forwarding is only supported for Hyperkit on Darwin, skipping test DNS forwarding
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil (0.02s)

                                                
                                    
x
+
TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS (0.01s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS
functional_test_tunnel_test.go:99: DNS forwarding is only supported for Hyperkit on Darwin, skipping test DNS forwarding
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS (0.01s)

                                                
                                    
x
+
TestFunctionalNewestKubernetes (0s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes
functional_test.go:84: 
--- SKIP: TestFunctionalNewestKubernetes (0.00s)

                                                
                                    
x
+
TestGvisorAddon (0s)

                                                
                                                
=== RUN   TestGvisorAddon
gvisor_addon_test.go:34: skipping test because --gvisor=false
--- SKIP: TestGvisorAddon (0.00s)

                                                
                                    
x
+
TestImageBuild (0s)

                                                
                                                
=== RUN   TestImageBuild
image_test.go:33: 
--- SKIP: TestImageBuild (0.00s)

                                                
                                    
x
+
TestKicCustomNetwork (0s)

                                                
                                                
=== RUN   TestKicCustomNetwork
kic_custom_network_test.go:34: only runs with docker driver
--- SKIP: TestKicCustomNetwork (0.00s)

                                                
                                    
x
+
TestKicExistingNetwork (0s)

                                                
                                                
=== RUN   TestKicExistingNetwork
kic_custom_network_test.go:73: only runs with docker driver
--- SKIP: TestKicExistingNetwork (0.00s)

                                                
                                    
x
+
TestKicCustomSubnet (0s)

                                                
                                                
=== RUN   TestKicCustomSubnet
kic_custom_network_test.go:102: only runs with docker/podman driver
--- SKIP: TestKicCustomSubnet (0.00s)

                                                
                                    
x
+
TestKicStaticIP (0s)

                                                
                                                
=== RUN   TestKicStaticIP
kic_custom_network_test.go:123: only run with docker/podman driver
--- SKIP: TestKicStaticIP (0.00s)

                                                
                                    
x
+
TestChangeNoneUser (0s)

                                                
                                                
=== RUN   TestChangeNoneUser
none_test.go:38: Test requires none driver and SUDO_USER env to not be empty
--- SKIP: TestChangeNoneUser (0.00s)

                                                
                                    
x
+
TestScheduledStopWindows (0s)

                                                
                                                
=== RUN   TestScheduledStopWindows
scheduled_stop_test.go:42: test only runs on windows
--- SKIP: TestScheduledStopWindows (0.00s)

                                                
                                    
x
+
TestSkaffold (0s)

                                                
                                                
=== RUN   TestSkaffold
skaffold_test.go:45: skaffold requires docker-env, currently testing crio container runtime
--- SKIP: TestSkaffold (0.00s)

                                                
                                    
x
+
TestInsufficientStorage (0s)

                                                
                                                
=== RUN   TestInsufficientStorage
status_test.go:38: only runs with docker driver
--- SKIP: TestInsufficientStorage (0.00s)

                                                
                                    
x
+
TestMissingContainerUpgrade (0s)

                                                
                                                
=== RUN   TestMissingContainerUpgrade
version_upgrade_test.go:284: This test is only for Docker
--- SKIP: TestMissingContainerUpgrade (0.00s)

                                                
                                    
x
+
TestStartStop/group/disable-driver-mounts (0.22s)

                                                
                                                
=== RUN   TestStartStop/group/disable-driver-mounts
=== PAUSE TestStartStop/group/disable-driver-mounts

                                                
                                                

                                                
                                                
=== CONT  TestStartStop/group/disable-driver-mounts
start_stop_delete_test.go:101: skipping TestStartStop/group/disable-driver-mounts - only runs on virtualbox
helpers_test.go:175: Cleaning up "disable-driver-mounts-696615" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p disable-driver-mounts-696615
--- SKIP: TestStartStop/group/disable-driver-mounts (0.22s)

                                                
                                    
x
+
TestNetworkPlugins/group/kubenet (3.72s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/kubenet
net_test.go:93: Skipping the test as crio container runtimes requires CNI
panic.go:631: 
----------------------- debugLogs start: kubenet-056871 [pass: true] --------------------------------
>>> netcat: nslookup kubernetes.default:
Error in configuration: context was not found for specified context: kubenet-056871

                                                
                                                

                                                
                                                
>>> netcat: nslookup debug kubernetes.default a-records:
Error in configuration: context was not found for specified context: kubenet-056871

                                                
                                                

                                                
                                                
>>> netcat: dig search kubernetes.default:
Error in configuration: context was not found for specified context: kubenet-056871

                                                
                                                

                                                
                                                
>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local udp/53:
Error in configuration: context was not found for specified context: kubenet-056871

                                                
                                                

                                                
                                                
>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local tcp/53:
Error in configuration: context was not found for specified context: kubenet-056871

                                                
                                                

                                                
                                                
>>> netcat: nc 10.96.0.10 udp/53:
Error in configuration: context was not found for specified context: kubenet-056871

                                                
                                                

                                                
                                                
>>> netcat: nc 10.96.0.10 tcp/53:
Error in configuration: context was not found for specified context: kubenet-056871

                                                
                                                

                                                
                                                
>>> netcat: /etc/nsswitch.conf:
Error in configuration: context was not found for specified context: kubenet-056871

                                                
                                                

                                                
                                                
>>> netcat: /etc/hosts:
Error in configuration: context was not found for specified context: kubenet-056871

                                                
                                                

                                                
                                                
>>> netcat: /etc/resolv.conf:
Error in configuration: context was not found for specified context: kubenet-056871

                                                
                                                

                                                
                                                
>>> host: /etc/nsswitch.conf:
* Profile "kubenet-056871" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-056871"

                                                
                                                

                                                
                                                
>>> host: /etc/hosts:
* Profile "kubenet-056871" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-056871"

                                                
                                                

                                                
                                                
>>> host: /etc/resolv.conf:
* Profile "kubenet-056871" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-056871"

                                                
                                                

                                                
                                                
>>> k8s: nodes, services, endpoints, daemon sets, deployments and pods, :
Error in configuration: context was not found for specified context: kubenet-056871

                                                
                                                

                                                
                                                
>>> host: crictl pods:
* Profile "kubenet-056871" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-056871"

                                                
                                                

                                                
                                                
>>> host: crictl containers:
* Profile "kubenet-056871" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-056871"

                                                
                                                

                                                
                                                
>>> k8s: describe netcat deployment:
error: context "kubenet-056871" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe netcat pod(s):
error: context "kubenet-056871" does not exist

                                                
                                                

                                                
                                                
>>> k8s: netcat logs:
error: context "kubenet-056871" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe coredns deployment:
error: context "kubenet-056871" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe coredns pods:
error: context "kubenet-056871" does not exist

                                                
                                                

                                                
                                                
>>> k8s: coredns logs:
error: context "kubenet-056871" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe api server pod(s):
error: context "kubenet-056871" does not exist

                                                
                                                

                                                
                                                
>>> k8s: api server logs:
error: context "kubenet-056871" does not exist

                                                
                                                

                                                
                                                
>>> host: /etc/cni:
* Profile "kubenet-056871" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-056871"

                                                
                                                

                                                
                                                
>>> host: ip a s:
* Profile "kubenet-056871" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-056871"

                                                
                                                

                                                
                                                
>>> host: ip r s:
* Profile "kubenet-056871" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-056871"

                                                
                                                

                                                
                                                
>>> host: iptables-save:
* Profile "kubenet-056871" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-056871"

                                                
                                                

                                                
                                                
>>> host: iptables table nat:
* Profile "kubenet-056871" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-056871"

                                                
                                                

                                                
                                                
>>> k8s: describe kube-proxy daemon set:
error: context "kubenet-056871" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe kube-proxy pod(s):
error: context "kubenet-056871" does not exist

                                                
                                                

                                                
                                                
>>> k8s: kube-proxy logs:
error: context "kubenet-056871" does not exist

                                                
                                                

                                                
                                                
>>> host: kubelet daemon status:
* Profile "kubenet-056871" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-056871"

                                                
                                                

                                                
                                                
>>> host: kubelet daemon config:
* Profile "kubenet-056871" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-056871"

                                                
                                                

                                                
                                                
>>> k8s: kubelet logs:
* Profile "kubenet-056871" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-056871"

                                                
                                                

                                                
                                                
>>> host: /etc/kubernetes/kubelet.conf:
* Profile "kubenet-056871" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-056871"

                                                
                                                

                                                
                                                
>>> host: /var/lib/kubelet/config.yaml:
* Profile "kubenet-056871" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-056871"

                                                
                                                

                                                
                                                
>>> k8s: kubectl config:
apiVersion: v1
clusters: null
contexts: null
current-context: ""
kind: Config
preferences: {}
users: null

                                                
                                                

                                                
                                                
>>> k8s: cms:
Error in configuration: context was not found for specified context: kubenet-056871

                                                
                                                

                                                
                                                
>>> host: docker daemon status:
* Profile "kubenet-056871" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-056871"

                                                
                                                

                                                
                                                
>>> host: docker daemon config:
* Profile "kubenet-056871" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-056871"

                                                
                                                

                                                
                                                
>>> host: /etc/docker/daemon.json:
* Profile "kubenet-056871" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-056871"

                                                
                                                

                                                
                                                
>>> host: docker system info:
* Profile "kubenet-056871" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-056871"

                                                
                                                

                                                
                                                
>>> host: cri-docker daemon status:
* Profile "kubenet-056871" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-056871"

                                                
                                                

                                                
                                                
>>> host: cri-docker daemon config:
* Profile "kubenet-056871" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-056871"

                                                
                                                

                                                
                                                
>>> host: /etc/systemd/system/cri-docker.service.d/10-cni.conf:
* Profile "kubenet-056871" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-056871"

                                                
                                                

                                                
                                                
>>> host: /usr/lib/systemd/system/cri-docker.service:
* Profile "kubenet-056871" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-056871"

                                                
                                                

                                                
                                                
>>> host: cri-dockerd version:
* Profile "kubenet-056871" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-056871"

                                                
                                                

                                                
                                                
>>> host: containerd daemon status:
* Profile "kubenet-056871" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-056871"

                                                
                                                

                                                
                                                
>>> host: containerd daemon config:
* Profile "kubenet-056871" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-056871"

                                                
                                                

                                                
                                                
>>> host: /lib/systemd/system/containerd.service:
* Profile "kubenet-056871" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-056871"

                                                
                                                

                                                
                                                
>>> host: /etc/containerd/config.toml:
* Profile "kubenet-056871" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-056871"

                                                
                                                

                                                
                                                
>>> host: containerd config dump:
* Profile "kubenet-056871" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-056871"

                                                
                                                

                                                
                                                
>>> host: crio daemon status:
* Profile "kubenet-056871" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-056871"

                                                
                                                

                                                
                                                
>>> host: crio daemon config:
* Profile "kubenet-056871" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-056871"

                                                
                                                

                                                
                                                
>>> host: /etc/crio:
* Profile "kubenet-056871" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-056871"

                                                
                                                

                                                
                                                
>>> host: crio config:
* Profile "kubenet-056871" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-056871"

                                                
                                                
----------------------- debugLogs end: kubenet-056871 [took: 3.53233772s] --------------------------------
helpers_test.go:175: Cleaning up "kubenet-056871" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p kubenet-056871
--- SKIP: TestNetworkPlugins/group/kubenet (3.72s)

                                                
                                    
x
+
TestNetworkPlugins/group/cilium (4.31s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/cilium
net_test.go:102: Skipping the test as it's interfering with other tests and is outdated
panic.go:631: 
----------------------- debugLogs start: cilium-056871 [pass: true] --------------------------------
>>> netcat: nslookup kubernetes.default:
Error in configuration: context was not found for specified context: cilium-056871

                                                
                                                

                                                
                                                
>>> netcat: nslookup debug kubernetes.default a-records:
Error in configuration: context was not found for specified context: cilium-056871

                                                
                                                

                                                
                                                
>>> netcat: dig search kubernetes.default:
Error in configuration: context was not found for specified context: cilium-056871

                                                
                                                

                                                
                                                
>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local udp/53:
Error in configuration: context was not found for specified context: cilium-056871

                                                
                                                

                                                
                                                
>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local tcp/53:
Error in configuration: context was not found for specified context: cilium-056871

                                                
                                                

                                                
                                                
>>> netcat: nc 10.96.0.10 udp/53:
Error in configuration: context was not found for specified context: cilium-056871

                                                
                                                

                                                
                                                
>>> netcat: nc 10.96.0.10 tcp/53:
Error in configuration: context was not found for specified context: cilium-056871

                                                
                                                

                                                
                                                
>>> netcat: /etc/nsswitch.conf:
Error in configuration: context was not found for specified context: cilium-056871

                                                
                                                

                                                
                                                
>>> netcat: /etc/hosts:
Error in configuration: context was not found for specified context: cilium-056871

                                                
                                                

                                                
                                                
>>> netcat: /etc/resolv.conf:
Error in configuration: context was not found for specified context: cilium-056871

                                                
                                                

                                                
                                                
>>> host: /etc/nsswitch.conf:
* Profile "cilium-056871" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-056871"

                                                
                                                

                                                
                                                
>>> host: /etc/hosts:
* Profile "cilium-056871" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-056871"

                                                
                                                

                                                
                                                
>>> host: /etc/resolv.conf:
* Profile "cilium-056871" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-056871"

                                                
                                                

                                                
                                                
>>> k8s: nodes, services, endpoints, daemon sets, deployments and pods, :
Error in configuration: context was not found for specified context: cilium-056871

                                                
                                                

                                                
                                                
>>> host: crictl pods:
* Profile "cilium-056871" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-056871"

                                                
                                                

                                                
                                                
>>> host: crictl containers:
* Profile "cilium-056871" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-056871"

                                                
                                                

                                                
                                                
>>> k8s: describe netcat deployment:
error: context "cilium-056871" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe netcat pod(s):
error: context "cilium-056871" does not exist

                                                
                                                

                                                
                                                
>>> k8s: netcat logs:
error: context "cilium-056871" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe coredns deployment:
error: context "cilium-056871" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe coredns pods:
error: context "cilium-056871" does not exist

                                                
                                                

                                                
                                                
>>> k8s: coredns logs:
error: context "cilium-056871" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe api server pod(s):
error: context "cilium-056871" does not exist

                                                
                                                

                                                
                                                
>>> k8s: api server logs:
error: context "cilium-056871" does not exist

                                                
                                                

                                                
                                                
>>> host: /etc/cni:
* Profile "cilium-056871" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-056871"

                                                
                                                

                                                
                                                
>>> host: ip a s:
* Profile "cilium-056871" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-056871"

                                                
                                                

                                                
                                                
>>> host: ip r s:
* Profile "cilium-056871" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-056871"

                                                
                                                

                                                
                                                
>>> host: iptables-save:
* Profile "cilium-056871" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-056871"

                                                
                                                

                                                
                                                
>>> host: iptables table nat:
* Profile "cilium-056871" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-056871"

                                                
                                                

                                                
                                                
>>> k8s: describe cilium daemon set:
Error in configuration: context was not found for specified context: cilium-056871

                                                
                                                

                                                
                                                
>>> k8s: describe cilium daemon set pod(s):
Error in configuration: context was not found for specified context: cilium-056871

                                                
                                                

                                                
                                                
>>> k8s: cilium daemon set container(s) logs (current):
error: context "cilium-056871" does not exist

                                                
                                                

                                                
                                                
>>> k8s: cilium daemon set container(s) logs (previous):
error: context "cilium-056871" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe cilium deployment:
Error in configuration: context was not found for specified context: cilium-056871

                                                
                                                

                                                
                                                
>>> k8s: describe cilium deployment pod(s):
Error in configuration: context was not found for specified context: cilium-056871

                                                
                                                

                                                
                                                
>>> k8s: cilium deployment container(s) logs (current):
error: context "cilium-056871" does not exist

                                                
                                                

                                                
                                                
>>> k8s: cilium deployment container(s) logs (previous):
error: context "cilium-056871" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe kube-proxy daemon set:
error: context "cilium-056871" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe kube-proxy pod(s):
error: context "cilium-056871" does not exist

                                                
                                                

                                                
                                                
>>> k8s: kube-proxy logs:
error: context "cilium-056871" does not exist

                                                
                                                

                                                
                                                
>>> host: kubelet daemon status:
* Profile "cilium-056871" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-056871"

                                                
                                                

                                                
                                                
>>> host: kubelet daemon config:
* Profile "cilium-056871" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-056871"

                                                
                                                

                                                
                                                
>>> k8s: kubelet logs:
* Profile "cilium-056871" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-056871"

                                                
                                                

                                                
                                                
>>> host: /etc/kubernetes/kubelet.conf:
* Profile "cilium-056871" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-056871"

                                                
                                                

                                                
                                                
>>> host: /var/lib/kubelet/config.yaml:
* Profile "cilium-056871" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-056871"

                                                
                                                

                                                
                                                
>>> k8s: kubectl config:
apiVersion: v1
clusters:
- cluster:
    certificate-authority: /home/jenkins/minikube-integration/20602-1162386/.minikube/ca.crt
    extensions:
    - extension:
        last-update: Mon, 07 Apr 2025 13:30:40 UTC
        provider: minikube.sigs.k8s.io
        version: v1.35.0
      name: cluster_info
    server: https://192.168.61.195:8443
  name: NoKubernetes-027070
contexts:
- context:
    cluster: NoKubernetes-027070
    extensions:
    - extension:
        last-update: Mon, 07 Apr 2025 13:30:40 UTC
        provider: minikube.sigs.k8s.io
        version: v1.35.0
      name: context_info
    namespace: default
    user: NoKubernetes-027070
  name: NoKubernetes-027070
current-context: NoKubernetes-027070
kind: Config
preferences: {}
users:
- name: NoKubernetes-027070
  user:
    client-certificate: /home/jenkins/minikube-integration/20602-1162386/.minikube/profiles/NoKubernetes-027070/client.crt
    client-key: /home/jenkins/minikube-integration/20602-1162386/.minikube/profiles/NoKubernetes-027070/client.key

                                                
                                                

                                                
                                                
>>> k8s: cms:
Error in configuration: context was not found for specified context: cilium-056871

                                                
                                                

                                                
                                                
>>> host: docker daemon status:
* Profile "cilium-056871" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-056871"

                                                
                                                

                                                
                                                
>>> host: docker daemon config:
* Profile "cilium-056871" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-056871"

                                                
                                                

                                                
                                                
>>> host: /etc/docker/daemon.json:
* Profile "cilium-056871" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-056871"

                                                
                                                

                                                
                                                
>>> host: docker system info:
* Profile "cilium-056871" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-056871"

                                                
                                                

                                                
                                                
>>> host: cri-docker daemon status:
* Profile "cilium-056871" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-056871"

                                                
                                                

                                                
                                                
>>> host: cri-docker daemon config:
* Profile "cilium-056871" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-056871"

                                                
                                                

                                                
                                                
>>> host: /etc/systemd/system/cri-docker.service.d/10-cni.conf:
* Profile "cilium-056871" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-056871"

                                                
                                                

                                                
                                                
>>> host: /usr/lib/systemd/system/cri-docker.service:
* Profile "cilium-056871" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-056871"

                                                
                                                

                                                
                                                
>>> host: cri-dockerd version:
* Profile "cilium-056871" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-056871"

                                                
                                                

                                                
                                                
>>> host: containerd daemon status:
* Profile "cilium-056871" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-056871"

                                                
                                                

                                                
                                                
>>> host: containerd daemon config:
* Profile "cilium-056871" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-056871"

                                                
                                                

                                                
                                                
>>> host: /lib/systemd/system/containerd.service:
* Profile "cilium-056871" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-056871"

                                                
                                                

                                                
                                                
>>> host: /etc/containerd/config.toml:
* Profile "cilium-056871" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-056871"

                                                
                                                

                                                
                                                
>>> host: containerd config dump:
* Profile "cilium-056871" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-056871"

                                                
                                                

                                                
                                                
>>> host: crio daemon status:
* Profile "cilium-056871" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-056871"

                                                
                                                

                                                
                                                
>>> host: crio daemon config:
* Profile "cilium-056871" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-056871"

                                                
                                                

                                                
                                                
>>> host: /etc/crio:
* Profile "cilium-056871" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-056871"

                                                
                                                

                                                
                                                
>>> host: crio config:
* Profile "cilium-056871" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-056871"

                                                
                                                
----------------------- debugLogs end: cilium-056871 [took: 4.14726306s] --------------------------------
helpers_test.go:175: Cleaning up "cilium-056871" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p cilium-056871
--- SKIP: TestNetworkPlugins/group/cilium (4.31s)

                                                
                                    